How to run Docker with experimental features on Ubuntu 16.04

I have the following question:
How do I run Docker with experimental features enabled (like image squashing, docker build --squash=true ..., to reduce image size) on Ubuntu 16.04?

To turn on experimental Docker features, create the following file:
sudo nano /etc/docker/daemon.json
and add the content below to it:
{
"experimental": true
}
then save the file (CTRL+X, then confirm with Enter) and exit. In the terminal, type:
sudo service docker restart
To check that experimental features are on, type in the terminal:
docker version
And you should see Experimental: true
UPDATE
Instead of nano you can use this one-liner:
echo $'{\n "experimental": true\n}' | sudo tee /etc/docker/daemon.json
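With experimental mode on, the --squash flag from the question becomes available. A minimal sketch (the image name is illustrative):
docker build --squash -t myapp:squashed .
docker history myapp:squashed
The squashed image should show fewer layers in the history output.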

I tried everything here on an Ubuntu 18.04 VM on my Mac and nothing worked. Everything on the interwebs said the same thing, but the one thing that finally got experimental turned on was @Michael Haren's tiny answer:
FYI: to enable this for the client, the config file to create is ~/.docker/config.json and the value is "enabled", not true
which meant something like this for me:
$ mkdir ~/.docker
$ echo '{ "experimental": "enabled" }' > ~/.docker/config.json
$ sudo systemctl restart docker
$ docker version
...
Experimental: true
...
This should be a top-level answer. So, credit to them (except sweet internet karma points for me...).

If you only want to enable it temporarily, without modifying any files, you can export DOCKER_CLI_EXPERIMENTAL=enabled. The following turns on experimental mode for your client:
$ docker version
Experimental: false
$ export DOCKER_CLI_EXPERIMENTAL=enabled
$ docker version
Experimental: true
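With the variable exported, experimental client commands work in that shell; for example, docker manifest (the image is just an illustration, any public image works):
docker manifest inspect alpine:latest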

Posting this to help those who are running Docker on macOS.
You will need to enable experimental features in two files: one for the client and one for the Docker Engine.
I suggest opening each file manually instead of echoing directly into it, because the file might already contain other configuration that you don't want to overwrite accidentally.
For the client, open ~/.docker/config.json and add "experimental": "enabled" at the top level of the config, as below:
{
"experimental" : "enabled",
"auths" : {
"harbor.xxx.com" : {
}
},
"credsStore" : "desktop"
}
For the Docker Engine, open ~/.docker/daemon.json and add "experimental": true at the top level of the config, as below:
{
"features": {
"buildkit": true
},
"experimental": true,
"builder": {
"gc": {
"defaultKeepStorage": "20GB",
"enabled": true
}
}
}
Do note that the value of experimental is different between the client and the server.
Once done, restart Docker using the command below:
killall Docker && open /Applications/Docker.app
Then verify the result:
docker version
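If you only want the relevant lines instead of the full output, either of these works (the Go template asks the engine directly):
docker version | grep -i experimental
docker version --format '{{.Server.Experimental}}'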

Alternatively, append the --experimental flag to the dockerd ExecStart line in the systemd unit file:
sudo sed -i 's/ExecStart=\/usr\/bin\/dockerd -H fd:\/\/ --containerd=\/run\/containerd\/containerd.sock/ExecStart=\/usr\/bin\/dockerd -H fd:\/\/ --containerd=\/run\/containerd\/containerd.sock --experimental/g' /lib/systemd/system/docker.service
sudo systemctl daemon-reload
sudo systemctl restart docker
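If you would rather not edit the packaged unit file under /lib/systemd/system, a drop-in override gives the same result (a sketch; the empty ExecStart= line clears the original value before redefining it):
sudo mkdir -p /etc/systemd/system/docker.service.d
printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --experimental\n' | sudo tee /etc/systemd/system/docker.service.d/experimental.conf
sudo systemctl daemon-reload
sudo systemctl restart docker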

On Linux you can solve this with systemctl, as described in https://stackoverflow.com/a/70460819/433814 on this SO question, but first you need to edit the correct files. Here's how to set it up on macOS if you were looking for a similar answer.
Docker experimental settings on macOS
Set the variable ENABLED=true or ENABLED=false and this script will turn the setting on or off, writing to both files.
NOTE: You MUST have jq installed to run this and update the files in place.
ENABLED=true; \
CONFIG=~/.docker/config.json; DAEMON=~/.docker/daemon.json ; \
cat <<< $(jq --argjson V ${ENABLED} '.experimental = $V' ${DAEMON}) > ${DAEMON} ; \
cat <<< $(jq --arg V $(if [ "${ENABLED}" = "true" ]; then echo "enabled"; else echo "disabled"; fi) '.experimental = $V' ${CONFIG}) > ${CONFIG} ; \
cat ~/.docker/config.json ; \
cat ~/.docker/daemon.json
Output confirmation
The script prints both files, confirming the change:
{
"auths": {
"https://index.docker.io/v1/": {},
"registry.gitlab.com": {}
},
"credsStore": "desktop",
"experimental": "enabled",
"currentContext": "default"
}
{
"builder": {
"gc": {
"defaultKeepStorage": "20GB",
"enabled": true
}
},
"experimental": true,
"features": {
"buildkit": true
}
}
Restart the Docker Engine on macOS
Just run the following:
killall Docker && open /Applications/Docker.app


docker-credential-desktop not installed or not available in PATH

I might have a bit of a messed-up Docker installation on my Mac.
At first I installed Docker Desktop, but when running it I learned that, since I'm on an older Mac, I had to install VirtualBox, so I did, following these steps:
enable writing on the /usr/local/bin folder for user
sudo chown -R $(whoami) /usr/local/bin
install Docker-Machine
base=https://github.com/docker/machine/releases/download/v0.16.0 &&
curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/usr/local/bin/docker-machine &&
chmod +x /usr/local/bin/docker-machine
install the Xcode CLI tools manually from the developer account
Install Home Brew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Install Docker + wget ( Using Brew)
brew install docker
brew install wget
Install bash completion scripts
base=https://raw.githubusercontent.com/docker/machine/v0.16.0
for i in docker-machine-prompt.bash docker-machine-wrapper.bash docker-machine.bash
do
sudo wget "$base/contrib/completion/bash/${i}" -P /etc/bash_completion.d
done
enable the docker-machine shell prompt
echo 'PS1='"'"'[\u@\h \W$(__docker_machine_ps1)]\$ '"'"'' >> ~/.bashrc
Install VirtualBox, ExtensionPack and SDK: https://www.virtualbox.org/wiki/Downloads
I have now installed docker-compose (docker-compose version 1.29.2, build unknown) with Homebrew, but when running docker-compose up I get the following error:
docker.credentials.errors.InitializationError: docker-credential-desktop not installed or not available in PATH
which docker prints /usr/local/bin/docker.
Brew installations are in /usr/local/Cellar/docker/20.10.6 and /usr/local/Cellar/docker-compose/1.29.2.
As I see there is also a Homebrew formula for docker-machine; should I install docker-machine via Homebrew instead?
What can I check to make sure that I use the Docker installations from Homebrew and wipe/correct the installations made in the steps above?
Check your ~/.docker/config.json and replace "credsStore" with "credStore":
{
"stackOrchestrator" : "swarm",
"experimental" : "disabled",
"credStore" : "desktop"
}
Just change credsStore to credStore in ~/.docker/config.json.
After a lot of googling I found out that the problem is with the config.json file.
The "credsStore" : "docker-credential-desktop" entry is the wrong one in:
{
"credsStore" : "docker-credential-desktop",
"stackOrchestrator" : "swarm",
"experimental" : "disabled"
}
I changed the "credsStore" key's value to "desktop" and compose now works as expected. Some pointed out that a typo in the credsStore key name was the problem and fixed it by renaming the key, but in my case the value was the problem; it works with both "credsStore" : "desktop" and "credStore" : "desktop".
Hope it'll help others starting out with Docker.
Cheers.
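Whichever key you end up with, it is worth confirming that the helper binary the value points to actually exists on your PATH; the value "desktop" maps to a program named docker-credential-desktop (a quick sanity check, not a fix in itself):
which docker-credential-desktop
docker-credential-desktop version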
Since you're on a Mac, you could use docker-credential-osxkeychain instead.
Install docker-credential-helper.
brew install docker-credential-helper
Verify docker-credential-osxkeychain is available.
$ docker-credential-osxkeychain version
0.6.4
Set credsStore to osxkeychain in ~/.docker/config.json
{
"auths": {
"https://index.docker.io/v1/": {}
},
"credsStore": "osxkeychain",
"experimental": "enabled",
"stackOrchestrator": "swarm"
}
Log in to Docker Hub.
$ docker login -u $USER
Password:
Login Succeeded
I ran into a similar issue using WSL 2 on Windows 10 while trying to invoke an AWS Lambda function locally. I was getting docker.credentials.errors.InitializationError: docker-credential-desktop not installed or not available in PATH when running sam build --use-container. Running which docker-credential-desktop showed no results.
Upon further inspection I found that docker-credential-desktop.exe was in PATH, however. After a quick Google, it seems that enabling the WSL 2 backend in Docker Desktop for Windows 10 symlinks /wsl/docker-desktop/cli-tools/usr/bin/docker-credential-desktop.exe to /usr/bin/docker-credential-desktop.exe. To fix this I simply removed the symlink and created a new one without the .exe suffix.
To check the link and remove it:
user@device:~$ ls -l /usr/bin/docker-credential-desktop.exe
lrwxrwxrwx 1 root root 67 Jan 5 23:15 /usr/bin/docker-credential-desktop.exe -> /wsl/docker-desktop/cli-tools/usr/bin/docker-credential-desktop.exe
user@device:~$ sudo rm /usr/bin/docker-credential-desktop.exe
To create a new one without .exe and check it worked:
user@device:~$ sudo ln -s /wsl/docker-desktop/cli-tools/usr/bin/docker-credential-desktop.exe /usr/bin/docker-credential-desktop
user@device:~$ ls -l /usr/bin/docker-credential-desktop
lrwxrwxrwx 1 root root 67 Jan 12 14:22 /usr/bin/docker-credential-desktop -> /wsl/docker-desktop/cli-tools/usr/bin/docker-credential-desktop.exe
After that I sourced .bashrc to update PATH and the problem was resolved. I verified this with which docker-credential-desktop and it now shows the location specified in the symlink above.
If you are on WSL, try desktop.exe instead of desktop, because you will find that the program in /usr/bin/ is docker-credential-desktop.exe.
{
"credsStore": "desktop.exe"
}
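You can confirm from inside WSL that the Windows helper is reachable before changing the config (WSL interop lets you run .exe binaries directly):
which docker-credential-desktop.exe
docker-credential-desktop.exe version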

Enable experimental Docker features on GitHub workflow images

We are trying to enable experimental features on the ubuntu-latest image in GitHub workflows, since we would like to use squash to reduce image size. However, this is not possible, as we get the following error:
/home/runner/work/_temp/59d363d1-0231-4d54-bffe-1e3205bf6bf3.sh: line
3: /etc/docker/daemon.json: Permission denied
for the following workflow:
- name: Build, tag, and push TOING image to Amazon ECR
id: build-image
env:
ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
ECR_REPOSITORY: TOING/TOING/TOING_REPO
IMAGE_TAG: TOING_TEST
DOCKER_CLI_EXPERIMENTAL: enabled
run: |
#build and push images
sudo rm -rf /etc/docker/daemon.json
sudo echo '{"experimental": true}' >> /etc/docker/daemon.json
sudo systemctl restart docker
docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG -f core/TOING/Dockerfile .
docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
We have verified that the daemon.json file is properly updated, and also used sudo for our commands, as shown.
We have also opened an issue on GitHub regarding this, but have had no response so far. I would be grateful for any help.
PS: We have tried both "experimental": true and "experimental": "enabled".
We have verified that the daemon.json file is properly updated
It looks like it's not properly updated, based on your error message:
/home/runner/work/_temp/59d363d1-0231-4d54-bffe-1e3205bf6bf3.sh: line
3: /etc/docker/daemon.json: Permission denied
What's going on here? Well, the sudo command will run the given command as root. But you're doing a shell redirect, which is handled by the shell itself, not by sudo. In other words, the redirection to /etc/docker/daemon.json is performed by your unprivileged shell, so it fails with Permission denied.
If you want to write to a file as root then you'll need to actually run a command that writes the file, and then run that using sudo. For example:
echo '{"experimental": true}' | sudo tee -a /etc/docker/daemon.json
This works best for me.
tmp=$(mktemp)
sudo jq '.+{experimental:true}' /etc/docker/daemon.json > "$tmp"
sudo mv "$tmp" /etc/docker/daemon.json
sudo systemctl restart docker.service
Edward Thomson's reply is on point; however, it assumes that the daemon.json file is empty. In my GitHub workflow definition the file was already present with a JSON object in it, and simply appending {"experimental": true} would yield no benefit.
My quick recommendation is to use the sed tool for the job.
sudo sed -i 's/}/,"experimental": true}/' /etc/docker/daemon.json
Here we replace the closing brace of the object with our key/value pair followed by the closing brace.
For more in-depth explanation, I've replied on the respective GitHub issue found here https://github.com/actions/starter-workflows/issues/336#issuecomment-1213996399.
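To make the substitution concrete, here is what it does to a daemon.json that already has content, assuming the file is a single-line object with no nested braces before the end (the input is only an example):
# before: {"exec-opts": ["native.cgroupdriver=systemd"]}
sudo sed -i 's/}/,"experimental": true}/' /etc/docker/daemon.json
# after:  {"exec-opts": ["native.cgroupdriver=systemd"],"experimental": true}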

How to create a Docker multiarch manifest using Cirrus CI?

I am trying to build a multiarch manifest with Cirrus CI, so I need to enable the Docker experimental option.
But the experimental option of Docker is not taken into account.
In the .cirrusci.yml I have something like:
publish_docker_builder:
script: |
mkdir -p $HOME/.docker
echo '{ "experimental": "enabled" }' > $HOME/.docker/config.json
docker info
docker login --username=$DOCKERHUB_USER --password=$DOCKERHUB_PASS
docker manifest create --amend $CIRRUS_REPO_FULL_NAME:latest $CIRRUS_REPO_FULL_NAME:linux $CIRRUS_REPO_FULL_NAME:rpi $CIRRUS_REPO_FULL_NAME:windows
But the execution reports:
mkdir -p $HOME/.docker
echo '{ "experimental": "enabled" }' > $HOME/.docker/config.json
....
Labels:
Experimental: false
....
docker manifest create is only supported on a Docker cli with experimental cli features enabled
The full log is https://api.cirrus-ci.com/v1/task/6577836603736064/logs/main.log
Is this a limitation of the dockerd available in Cirrus CI, or did I make a wrong configuration?
The Docker CLI seems to have changed the way to enable experimental CLI features:
DOCKER_CLI_EXPERIMENTAL Enable experimental features for the cli (e.g.
enabled or disabled)
Add this to the .cirrusci.yml:
env:
DOCKER_CLI_EXPERIMENTAL: enabled
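Putting it together with the task from the question (a sketch; only the env block is new, the rest is taken from the question, and the config.json echo is no longer needed):
env:
  DOCKER_CLI_EXPERIMENTAL: enabled

publish_docker_builder:
  script: |
    docker login --username=$DOCKERHUB_USER --password=$DOCKERHUB_PASS
    docker manifest create --amend $CIRRUS_REPO_FULL_NAME:latest $CIRRUS_REPO_FULL_NAME:linux $CIRRUS_REPO_FULL_NAME:rpi $CIRRUS_REPO_FULL_NAME:windows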

Visual Studio Code Remote - Containers - change shell

When launching an attached container in "VS Code Remote Development", has anyone found a way to change the container's shell when launching the VS Code integrated terminal?
It seems to run something similar to:
docker exec -it <containername> /bin/bash
I am looking for the equivalent of:
docker exec -it <containername> /bin/zsh
The only setting I found for attached containers is
"remote.containers.defaultExtensions": []
I worked around it with
RUN echo "if [ -t 1 ]; then" >> /root/.bashrc
RUN echo "exec zsh" >> /root/.bashrc
RUN echo "fi" >> /root/.bashrc
Still would be interested in knowing if there was a way to set this per container.
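If you prefer a single layer, the same snippet can be appended in one RUN, equivalent to the three lines above:
RUN printf '%s\n' 'if [ -t 1 ]; then' 'exec zsh' 'fi' >> /root/.bashrc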
I use a Docker container for my development environment and set the shell to bash in my Dockerfile:
# …
ENTRYPOINT ["bash"]
Yet when VS Code was connecting to my container it insisted on using the /bin/ash shell, which was driving me crazy... However, the fix (at least for me) was very simple but not obvious.
From the devcontainer.json reference, all I needed to do in my case was to add the following entry to my .devcontainer.json file:
{
…
"settings": {
"terminal.integrated.shell.*": "/bin/bash"
}
…
}
Complete .devcontainer.json file (FYI)
{
"name": "project-blueprint",
"dockerComposeFile": "./docker-compose.yml",
"service": "dev",
"workspaceFolder": "/workspace/dev",
"postCreateCommand": "yarn",
"settings": {
"terminal.integrated.shell.*": "/bin/bash"
}
}
I'd like to contribute to this thread, since I spent a decent amount of time combing the web for a good solution to this involving VS Code's new terminal.integrated.profiles.linux API.
Note: as of 20 Jan 2022 both the commented and the uncommented JSON work. The uncommented lines are the new, non-deprecated way to get this working with Dev Containers.
{
"settings": {
// "terminal.integrated.shell.linux": "/bin/zsh"
"terminal.integrated.defaultProfile.linux": "zsh",
"terminal.integrated.profiles.linux": {
"zsh": {
"path": "/bin/zsh"
}
}
}
}
If anyone is interested, I also figured out how to get Oh My Zsh built into the image.
Dockerfile:
# Setup Stage - set up the ZSH environment for optimal developer experience
FROM node:16-alpine AS setup
RUN npm install -g expo-cli
# Let scripts know we're running in Docker (useful for containerized development)
ENV RUNNING_IN_DOCKER true
# Use the unprivileged `node` user (pre-created by the Node image) for safety (and because it has permission to install modules)
RUN mkdir -p /app \
&& chown -R node:node /app
# Set up ZSH and our preferred terminal environment for containers
RUN apk --no-cache add zsh curl git
# Set up ZSH as the unprivileged user (we just need to start it, it'll initialize our setup itself)
USER node
# set up oh my zsh
RUN cd ~ && wget https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh && sh install.sh
# initialize ZSH
RUN /bin/zsh ~/.zshrc
# Switch back to root
USER root

ssh-agent does not remember identities when running inside a docker container in DC/OS

I am trying to run a service using DC/OS and Docker. I created my Stack using the template for my region from here. I also created the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y expect openssh-client
WORKDIR "/root"
ENTRYPOINT eval "$(ssh-agent -s)" && \
mkdir -p .ssh && \
echo $PRIVATE_KEY > .ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa && \
expect -c "spawn ssh-add /root/.ssh/id_rsa; expect \"Enter passphrase for /root/.ssh/id_rsa:\" send \"\"; interact " && \
while true; do ssh-add -l; sleep 2; done
I have a private repository that I would like to clone/pull from when the docker container starts. This is why I am trying to add the private key to the ssh-agent.
If I run this image as a docker container locally and supply the private key using the PRIVATE_KEY environment variable, everything works fine. I see that the identity is added.
The problem that I have is that when I try to run a service on DC/OS using the docker image, the ssh-agent does not seem to remember the identity that was added using the private key.
I have checked the error log from DC/OS. There are no errors.
Does anyone know why running the docker container on DC/OS is any different compared to running it locally?
EDIT: I have added details of the description of the DC/OS service in case it helps:
{
"id": "/SOME-ID",
"instances": 1,
"cpus": 1,
"mem": 128,
"disk": 0,
"gpus": 0,
"constraints": [],
"fetch": [],
"storeUrls": [],
"backoffSeconds": 1,
"backoffFactor": 1.15,
"maxLaunchDelaySeconds": 3600,
"container": {
"type": "DOCKER",
"volumes": [],
"docker": {
"image": "IMAGE NAME FROM DOCKERHUB",
"network": "BRIDGE",
"portMappings": [{
"containerPort": SOME PORT NUMBER,
"hostPort": SOME PORT NUMBER,
"servicePort": SERVICE PORT NUMBER,
"protocol": "tcp",
"name": “default”
}],
"privileged": false,
"parameters": [],
"forcePullImage": true
}
},
"healthChecks": [],
"readinessChecks": [],
"dependencies": [],
"upgradeStrategy": {
"minimumHealthCapacity": 1,
"maximumOverCapacity": 1
},
"unreachableStrategy": {
"inactiveAfterSeconds": 300,
"expungeAfterSeconds": 600
},
"killSelection": "YOUNGEST_FIRST",
"requirePorts": true,
"env": {
"PRIVATE_KEY": "ID_RSA PRIVATE_KEY WITH \n LINE BREAKS",
}
}
Docker Version
Check that your local version of Docker matches the version installed on the DC/OS agents. By default, the DC/OS 1.9.3 AWS CloudFormation templates use CoreOS 1235.12.0, which comes with Docker 1.12.6. It's possible that the entrypoint behavior has changed since then.
Docker Command
Check the Mesos task logs for the Marathon app in question and see what docker run command was executed. You might be passing it slightly different arguments when testing locally.
Script Errors
As mentioned in another answer, the script you provided has several errors that may or may not be related to the failure.
echo $PRIVATE_KEY should be echo "$PRIVATE_KEY" to preserve line breaks. Otherwise key decryption will fail with Bad passphrase, try again for /root/.ssh/id_rsa:.
expect -c "spawn ssh-add /root/.ssh/id_rsa; expect \"Enter passphrase for /root/.ssh/id_rsa:\" send \"\"; interact " should be expect -c "spawn ssh-add /root/.ssh/id_rsa; expect \"Enter passphrase for /root/.ssh/id_rsa:\"; send \"\n\"; interact ". It's missing a semi-colon and a line break. Otherwise the expect command fails without executing.
File Based Secrets
Enterprise DC/OS 1.10 (1.10.0-rc1 out now) has a new feature named File Based Secrets which allows for injecting files (like id_rsa files) without including their contents in the Marathon app definition, storing them securely in Vault using DC/OS Secrets.
Creation: https://docs.mesosphere.com/1.10/security/secrets/create-secrets/
Usage: https://docs.mesosphere.com/1.10/security/secrets/use-secrets/
File based secrets won't do the ssh-add for you, but they should make it easier and more secure to get the file into the container.
Mesos Bug
Mesos 1.2.0 switched to using Docker's --env-file instead of -e to pass in environment variables. This triggers a Docker --env-file bug: it doesn't support line breaks. A workaround was put into Mesos and DC/OS, but the fix may not be in the minor version you are using.
A manual workaround is to convert the id_rsa to base64 for the Marathon definition and decode it back in your entrypoint script.
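A sketch of that workaround (variable name PRIVATE_KEY as in the question; -w0 is the GNU coreutils option to disable line wrapping and does not exist in macOS base64):
# locally, produce a single-line value to paste into the Marathon "env" block
base64 -w0 ~/.ssh/id_rsa
# in the entrypoint, decode it back before ssh-add
echo "$PRIVATE_KEY" | base64 -d > /root/.ssh/id_rsa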
The key file contents being passed via PRIVATE_KEY originally contain line breaks. After echoing the unquoted PRIVATE_KEY variable to ~/.ssh/id_rsa, the line breaks will be gone, because the shell word-splits the unquoted expansion. You can fix that issue by wrapping the $PRIVATE_KEY variable in double quotes.
Another issue arises when the container is started without an attached TTY, i.e. without the -i -t command line parameters to docker run. The passphrase prompt will fail and the ssh key won't be added to the ssh-agent. For a container run in DC/OS, interactive input probably won't make sense, so you should change your entrypoint script accordingly. That will require your ssh key to be passwordless.
This changed Dockerfile should work:
ENTRYPOINT eval "$(ssh-agent -s)" && \
mkdir -p .ssh && \
echo "$PRIVATE_KEY" > .ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa && \
ssh-add /root/.ssh/id_rsa && \
while true; do ssh-add -l; sleep 2; done
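If you need a passwordless key dedicated to this deployment, one way to generate it (the file name and comment are illustrative; -N '' sets an empty passphrase):
ssh-keygen -t rsa -b 4096 -N '' -f ./deploy_key -C 'dcos-deploy'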
