Pull from GCR inside a GCE VM on Ubuntu 20.04 - docker

I haven't set up a GCE stack in a while, and I swear this gets more difficult over time.
So the setup's easy enough: a blank Ubuntu VM, with Docker installed via snap. Now when I try a pull from GCR, I get
> docker pull gcr.io/.../image
Using default tag: latest
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Fair enough. I checked my gcloud command:
> gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* ...-compute@developer.gserviceaccount.com
To set the active account, run:
$ gcloud config set account `ACCOUNT`
So the right service account is there. In IAM it's listed as an Editor and, for good measure, I added Storage Admin too.
Now I run
> gcloud auth configure-docker
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
Adding credentials for all GCR repositories.
WARNING: A long list of credential helpers may cause delays running 'docker build'. We recommend passing the registry name to configure only the registry you are using.
After update, the following will be written to your Docker config file
located at [/home/y/.docker/config.json]:
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "marketplace.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "staging-k8s.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud"
  }
}
Do you want to continue (Y/n)?
Docker configuration file updated.
And according to GCP's documentation, the warning is fine: gcloud can be used as an alternative to the standalone helper. But still, the pull fails. Bummer.
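It's worth checking whether Docker can even invoke the helper. Credential helpers read a registry URL on stdin and answer with JSON, so they can be driven by hand (a quick diagnostic sketch):
# is the helper on PATH at all?
which docker-credential-gcloud
# ask it for GCR credentials directly (credential-helper protocol)
echo "https://gcr.io" | docker-credential-gcloud get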
According to the documentation, sudo is a bad idea. So I tried adding my user to the docker group and apparently that clashes with snap. I ran
> sudo addgroup --system docker
> sudo adduser $USER docker
> newgrp docker
> sudo snap disable docker
> sudo snap enable docker
So now I can use docker with my account.
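A quick sanity check that the group change took effect, no sudo needed:
# current user should be listed in the docker group
id -nG | grep -w docker
docker ps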
The issue still persists though. I also tried the standalone helper with
> VERSION=2.0.0
> OS=linux # or "darwin" for OSX, "windows" for Windows.
> ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
> curl -fsSL "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" | tar xz --to-stdout ./docker-credential-gcr | sudo tee /usr/local/bin/docker-credential-gcr && sudo chmod +x /usr/local/bin/docker-credential-gcr
> docker-credential-gcr configure-docker
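The standalone helper can be exercised the same way over the credential-helper protocol (registry URL on stdin, JSON credentials on stdout), which rules out PATH problems:
which docker-credential-gcr
echo "https://gcr.io" | docker-credential-gcr get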
I've been troubleshooting this for too long, what's going on here?

Snap seems to have caused the issues here. Somewhere between the snap-specific configuration files for the helpers and the snap-installed gcloud SDK, things broke. I went with a fresh installation using apt only:
# remove the snap-installed SDK; reinstall everything via apt and the official scripts
sudo snap remove google-cloud-sdk
sudo apt update; sudo apt upgrade -y
sudo apt install docker.io
sudo curl -L --fail https://github.com/docker/compose/releases/download/1.25.5/run.sh -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker -v
# allow the current user to run docker without sudo
sudo usermod -a -G docker $USER
# exit and open a new shell so the group change takes effect
curl https://sdk.cloud.google.com | bash
# pick up the PATH changes made by the installer, then wire up the helper
. ~/.bashrc
sudo ln -s $(which gcloud) /usr/bin/
gcloud auth configure-docker
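To verify the fresh setup, a short sketch: check that both binaries resolve, inspect the generated Docker config, and retry the pull (image path elided as above):
which gcloud docker-credential-gcloud
cat ~/.docker/config.json
docker pull gcr.io/.../image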

Related

Safely setup Ubuntu vm with Terraform and Cloud-init

For personal use (and fun) I'm trying to set up a VM on which I want to host my website (Nginx, Django and Postgres running in Docker containers). I'm trying to learn how to set up the server using Terraform and cloud-init in a safe manner.
My current cloud-init code:
#cloud-config
groups:
  - docker
users:
  - default
  # the docker service account
  - name: test
    shell: /bin/bash
    home: /home/test
    groups: docker
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_import_id: None
    lock_passwd: true
    ssh-authorized-keys:
      - ssh-rsa my_public_ssh_key
package_update: true
package_upgrade: true
packages:
  - git
  - sudo
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
runcmd:
  # install docker following the guide: https://docs.docker.com/install/linux/docker-ce/ubuntu/
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - sudo apt-get -y update
  - sudo apt-get -y install docker-ce docker-ce-cli containerd.io
  - sudo systemctl enable docker
  # install docker-compose following the guide: https://docs.docker.com/compose/install/
  - sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - sudo chmod +x /usr/local/bin/docker-compose
power_state:
  mode: reboot
  message: Restarting after installing docker & docker-compose
The VM is Ubuntu 20.04
Technically I want the "test" user to be able to pull the latest code from my git repo and (re-)deploy the website (in /home/test/website) using docker-compose. Is it possible for the user to not have sudo permissions (I don't want it to have elevated permissions)? And secondly: how do I create a root account with a separate SSH key (and would this be a safe setup)?
The Terraform code that produces the VM.
resource "scaleway_instance_server" "app_server" {
type = var.instance_type
image = "ubuntu-focal"
name = var.instance_name
enable_ipv6 = true
tags = [ "FocalFossa", "MyUbuntuInstance" ]
root_volume {
size_in_gb = 20
delete_on_termination = true
}
lifecycle {
create_before_destroy = true
}
ip_id = scaleway_instance_ip.public_ip.id
security_group_id = scaleway_instance_security_group.www.id
# cloud init: setup
cloud_init = file("${path.module}/cloud-init.yml")
}
Help is much appreciated.
Is it possible for the user to not have sudo permissions (I don't want it to have elevated permissions)?
Anything run by cloud-init is run as root, including the bootcmd/runcmd commands. To run things as a different user, you can use sudo in your runcmd.
sudo -u test whoami >> /var/tmp/run_cmd
would write test to /var/tmp/run_cmd.
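Applied to this question's goal of deploying from /home/test/website, the same trick can drop privileges for the deployment steps too. A sketch, with a placeholder repository URL:
# run the checkout and deployment as the unprivileged "test" user
sudo -u test git clone https://example.com/me/website.git /home/test/website
sudo -u test docker-compose -f /home/test/website/docker-compose.yml up -d
Since test is in the docker group, the docker-compose call itself needs no root at all.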
And secondly: how do I create a root account with a separate SSH key (and would this be a safe setup)?
Your users section would look something like this.
users:
  - default
  # the docker service account
  - name: test
    shell: /bin/bash
    home: /home/test
    groups: docker
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: true
    ssh-authorized-keys:
      - ssh-rsa my-public-key
  - name: root
    ssh-authorized-keys:
      - ssh-rsa root-public-key
disable_root: false
Is it safe? I think that's debatable, but there's a reason root login is disabled by default. It should be possible to ssh into the default user and then sudo su for your root access needs.
Also, just FYI, the ssh_import_id: None in your config was raising an exception in the cloud-init log because it was trying to import an ssh id for user None.
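Such errors are easy to spot on the VM itself, since cloud-init logs to a fixed file on Ubuntu. A quick check:
# surface any tracebacks from the last cloud-init run
grep -i -A 3 traceback /var/log/cloud-init.log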

doctl is unable to find docker binary

Configuring the DigitalOcean Container Registry
link: https://www.digitalocean.com/docs/kubernetes/how-to/set-up-registry/
After successfully installing doctl via snap, I run:
# doctl registry login
Error: unable to find docker binary. Make sure docker is installed.
Docker is installed, though:
# docker --version
Docker version 18.09.2, build 6247962
GitHub issue: https://github.com/digitalocean/doctl/issues/709
Problem
doctl can't find docker because the snap binary path is /usr/snap/bin while the docker binary lives in /usr/local/bin/, so the connection between the two is broken.
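A quick way to confirm the mismatch is to check where each binary resolves from; a diagnostic sketch:
which doctl docker
echo "$PATH"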
This is my custom solution to make it work:
Step 1: Uninstall the snap doctl and refresh the env path
# sudo snap remove doctl
Step 2: Install doctl from the latest release package
curl -sL https://github.com/digitalocean/doctl/releases/download/v1.38.0/doctl-1.38.0-linux-amd64.tar.gz | tar -xzv
sudo mv ~/doctl /usr/local/bin
(optional) Step 2b: Fix the path problem
If doctl is still not found, fix it with a symbolic link:
ln -s /usr/local/bin/doctl /usr/snap/doctl
Step 3: Run the registry login command
# doctl registry login
If you get an error related to X11, run the commands below and then retry Step 3:
sudo apt update
sudo apt -V install gnupg2 pass
# doctl registry login
Login succeeded.

Is there any way to run "pkexec" from a docker container?

I am trying to set up a Docker image (my Dockerfile is available here, sorry for the French README: https://framagit.org/Gwendal/firefox-icedtea-docker) with an old version of Firefox and an old version of Java to run an old Java applet that starts a VPN. My image does work and successfully allows me to start the Java applet in Firefox.
Unfortunately, the said applet then tries to run the following command in the container (I've simply removed the --config part from the command as it does not matter here):
INFO: launching '/usr/bin/pkexec sh -c /usr/sbin/openvpn --config ...'
Then the applet exits silently with an error. While investigating, I've tried running a command with pkexec with the same Docker image, and it gives me this result:
$ sudo docker-compose run firefox pkexec /firefox/firefox-sdk/bin/firefox-bin -new-instance
**
ERROR:pkexec.c:719:main: assertion failed: (polkit_unix_process_get_start_time (POLKIT_UNIX_PROCESS (subject)) > 0)
But I don't know polkit at all and cannot understand this error.
EDIT: A more minimal way to reproduce the problem is with this Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get install -y policykit-1
And then run:
$ sudo docker build -t pkexec-test .
$ sudo docker run pkexec-test pkexec echo Hello
Which leads here again to:
ERROR:pkexec.c:719:main: assertion failed: (polkit_unix_process_get_start_time (POLKIT_UNIX_PROCESS (subject)) > 0)
Should I conclude that pkexec cannot work in a docker container? Or is there any way to make this command work?
Sidenote: I have no control whatsoever on the Java applet that I try to run, it is a horrible and very dated proprietary black box that I am supposed to use at work, for which I have no access to the source code, and that I must use as is.
I have solved my own problem by replacing pkexec with sudo in the Docker image, and by allowing passwordless sudo.
Given an Ubuntu Docker image where a user called developer was created and configured with a USER statement, add these lines:
# Install sudo and make 'developer' a passwordless sudoer
RUN apt-get update && apt-get install -y sudo
ADD ./developersudo /etc/sudoers.d/developersudo
# Replace pkexec with sudo
RUN rm /usr/bin/pkexec
RUN ln -s /usr/bin/sudo /usr/bin/pkexec
with the file developersudo containing:
developer ALL=(ALL) NOPASSWD:ALL
This replaces any call to pkexec made by a process running in the container with a call to sudo without a password prompt, which works nicely.
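With the minimal test image from the question rebuilt to include these lines, the previously failing command should now succeed:
sudo docker build -t pkexec-test .
# pkexec now resolves to sudo, so this prints "Hello" instead of the polkit assertion error
sudo docker run pkexec-test pkexec echo Hello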

docker login fails on a server with no X11 installed

I am trying to deploy a docker configuration with images on a private docker registry.
Now, every time I execute docker login registry.example.com, I get the following error message:
error getting credentials - err: exit status 1, out: Cannot autolaunch D-Bus without X11 $DISPLAY
The only solution I found for non-MacOS users was to run export $(dbus-launch) first, but that did not change anything.
I am running Ubuntu Server and tried with both the Ubuntu Docker package and the Docker-CE package.
How can I log in without an X11 session?
Looks like this is because Docker defaults to using the secretservice executable, which seems to have some sort of X11 dependency. If you install and configure pass, Docker will use that instead, which seems to solve the problem.
In a nutshell (from https://github.com/docker/compose/issues/6023)
sudo apt install gnupg2 pass
gpg2 --full-generate-key
This generates a gpg2 key for you. After that's done, you can list it with
gpg2 -k
Copy the key id (from the line labelled [uid]) and do
pass init "whatever key id you have"
Now docker login should work.
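To confirm Docker actually switched helpers, peek at the config file; the value should now be pass rather than secretservice:
grep credsStore ~/.docker/config.json
# expected: "credsStore": "pass"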
There are a couple of bugs logged on launchpad regarding this:
https://bugs.launchpad.net/ubuntu/+source/golang-github-docker-docker-credential-helpers/+bug/1794307
https://bugs.launchpad.net/ubuntu/+source/docker-compose/+bug/1796119
This works: sudo apt remove golang-docker-credential-helpers
You can remove the offending package golang-docker-credential-helpers without removing all of docker-compose.
The following worked for me on a server without X11 installed:
dpkg -r --ignore-depends=golang-docker-credential-helpers golang-docker-credential-helpers
and then
echo 'foo' | docker login mydockerrepo.com -u dockeruser --password-stdin
Source:
bug reported in debian:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=910823#39
bug reported on ubuntu:
https://bugs.launchpad.net/ubuntu/+source/docker-compose/+bug/1796119
secretservice requires a GUI. You can use pass without a GUI.
Unfortunately, Docker's documentation on how to configure Docker credential helpers is quite lacking. Here's a comprehensive guide on configuring pass with Docker (tested with Ubuntu 18.04):
1. Install the Docker Credential Helper for pass
Find the url for the latest version of docker-credential-pass from https://github.com/docker/docker-credential-helpers/releases . For example:
# substitute with the latest version
url=https://github.com/docker/docker-credential-helpers/releases/download/v0.6.2/docker-credential-pass-v0.6.2-amd64.tar.gz
# download and untar the binary
wget $url
tar -xzvf $(basename $url)
# move the binary to a dir in your $PATH
sudo mv docker-credential-pass /usr/local/bin
# verify it works
docker-credential-pass list
2. Install and configure pass
apt install pass
# create a gpg2 key
gpg2 --gen-key
# if you have issues with lack of entropy, "apt install haveged" and try again
# create the password store using the gpg user id above
pass init $gpg_id
3. docker login
docker login
# You should not see any credentials stored in "auths" section.
# "credsStore": "pass" should have been automatically added.
# If the value is "secretservice", replace it with "pass".
cat ~/.docker/config.json
# verify credentials stored in `pass` store now
pass
There is a much easier answer than the ones already posted, which I found in a comment on https://github.com/docker/docker-credential-helpers/issues/105.
The solution is to rename docker-credential-secretservice out of the way
e.g: mv /usr/bin/docker-credential-secretservice /usr/bin/docker-credential-secretservice.broken
Once you do this, docker login works regardless of whether or not docker-compose is installed. No other package additions or removals are necessary.
I've resolved this issue by uninstalling docker-compose which was installed from Ubuntu repo and installing docker-compose by official instruction at https://docs.docker.com/compose/install/#install-compose
What helped me on Ubuntu 18.04 was:
Following the steps in @oberstet's post and uninstalling the golang helper
Performing a login after the helper uninstall
Reinstalling docker via sudo apt-get install docker
Logging back in via sudo docker login

How do you run an Openshift Docker container as something besides root?

I'm currently running OpenShift, but I am running into a problem when I try to build/deploy my custom Docker container. The container works properly on my local machine, but once it gets built in OpenShift and I try to deploy it, I get this error message. I believe the problem is that I am trying to run commands inside the container as root.
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
My Docker file that I am deploying looks like this -
FROM centos:7
MAINTAINER me<me@me>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN git clone https://github.com/dockerFileBootstrap.git
RUN ansible-playbook "-e edit_url=andrewgarfield edit_alias=emmastone site_url=testing.com" dockerAnsible/dockerFileBootstrap.yml
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
COPY supervisord.conf /usr/etc/supervisord.conf
RUN rm -rf supervisord.conf
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80 443
#CMD ["/usr/bin/supervisord"]
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I've run into similar problems multiple times, with messages like Permission Denied on file /supervisord.log or something similar.
How can I set it up so that my container doesn't run all of its commands as root? That seems to be causing all of the problems I'm having.
OpenShift has a strict security policy regarding custom Docker builds.
Have a look at this OpenShift Application Platform
In particular at point 4 in the FAQ section, quoted here:
4. Why doesn't my Docker image run on OpenShift?
Security! Origin runs with the following security policy by default:
Containers run as a non-root unique user that is separate from other system users
They cannot access host resources, run privileged, or become root
They are given CPU and memory limits defined by the system administrator
Any persistent storage they access will be under a unique SELinux label, which prevents others from seeing their content
These settings are per project, so containers in different projects cannot see each other by default
Regular users can run Docker, source, and custom builds
By default, Docker builds can (and often do) run as root. You can control who can create Docker builds through the builds/docker and builds/custom policy resource.
Regular users and project admins cannot change their security quotas.
Many Docker containers expect to run as root (and therefore edit all the contents of the filesystem). The Image Author's guide gives recommendations on making your image more secure by default:
Don't run as root
Make directories you want to write to group-writable and owned by group id 0
Set the net-bind capability on your executables if they need to bind to ports <1024
Otherwise, you can see the security documentation for descriptions on how to relax these restrictions.
I hope it helps.
Although you don't have access to root, your OpenShift container, by default, is a member of the root group. You can change some dir/file permissions to avoid the Permission Denied errors.
If you're using a Dockerfile to deploy an image to OpenShift, you can add the following RUN command to your Dockerfile:
RUN chgrp -R 0 /run && chmod -R g=u /run
This will change the group for everything in the /run directory to the root group and then set the group permission on all files to be equivalent to the owner (group equals user) of the file. Essentially, any user in the root group has the same permissions as the owner for every file.
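The same pattern extends to any other path the process needs to write. For the httpd error above, something like this in the Dockerfile should cover both the pid file and the logs (the log path is an assumption for CentOS's httpd layout):
# give the root group owner-equivalent rights on the dirs httpd writes to
RUN chgrp -R 0 /run /var/log/httpd && chmod -R g=u /run /var/log/httpd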
You can run docker as any user, including root (rather than OpenShift's default built-in account UID, e.g. 1000030000), by issuing these two commands in sequence with the oc CLI tools:
oc login -u system:admin -n default
oc adm policy add-scc-to-user anyuid -z default -n projectname
where projectname is the name of the project your deployment runs in.
