microk8s disable kubeflow file disable.kubeflow.sh errors - kubeflow

I'm trying to disable kubeflow on microk8s. I installed/enabled kubeflow using:
# install MicroK8s
sudo snap install microk8s --classic
# update ufw to allow pod-pod and pod-internet comms
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
# update permissions
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
# --- RESTART MACHINE ---
# check installation
microk8s status --wait-ready
# enable core-dns and local storage etc
microk8s enable dns dashboard storage
# install kubeflow
microk8s enable kubeflow
I now want to disable it using microk8s disable kubeflow. The message I get is:
$ microk8s disable kubeflow
File "/snap/microk8s/2213/scripts/wrappers/common/../../../actions/disable.kubeflow.sh", line 52
click.echo(f"Destroying Kubeflow {resource}...")
What am I doing wrong?
OS: Ubuntu 20.04.2 LTS

If you are having issues with the MicroK8s Kubeflow add-on, you can try a few alternatives:
Install the Kubeflow Charmed Operators directly, following the respective documentation, using MicroK8s as the Kubernetes cluster.
Refresh your MicroK8s installation to the tip version via sudo snap refresh microk8s --classic --channel=edge. This might be useful if a fix has been released in the edge channel of the MicroK8s snap but not yet in the default stable channel.
Re-install the tip version of MicroK8s with sudo snap install microk8s --classic --channel=edge, and re-enable the Kubeflow add-on.
https://www.kubeflow.org/docs/distributions/microk8s/kubeflow-on-microk8s/#troubleshooting
I hope that helps you resolve your issue.

Related

Failed to connect. Is Docker running? (Vs Code)

I get this error in Ubuntu in VS Code and I can't see my images in VS Code.
When I run sudo docker ps -a, everything is OK in the terminal!
What should I do to solve this problem?
I think it may be because your user is not in the docker group.
Easily check the list of your user's groups using:
groups <user>
and look for "docker" in the output.
If it's not there, simply add the user to the docker group by typing:
sudo usermod -aG docker ${USER}
Don't forget to restart VS Code, and the system if necessary.
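The check-then-add logic above can be sketched as a small shell snippet (the in_group helper is hypothetical, written here only for illustration):

```shell
# Hypothetical helper: succeeds if the target group appears in a
# space-separated group list (the part `groups <user>` prints after the colon).
in_group() {
  case " $1 " in
    *" $2 "*) return 0 ;;
    *) return 1 ;;
  esac
}

# Demo on a fixed group list; in practice you would pass
# "$(groups "$USER" | cut -d: -f2)" as the first argument.
if in_group "adm sudo docker" docker; then
  echo "already in the docker group"
else
  echo "missing; run: sudo usermod -aG docker \$USER"
fi
```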
I had to create the docker group for themozel's solution to work.
Here is what worked for me:
Create the docker group:
sudo groupadd docker
Add your user to the docker group:
sudo usermod -aG docker $USER
The problem is that Docker is running as root, but VS Code is trying to connect as your user.
I was also having this problem.
I solved it by reinstalling the Docker Engine:
Remove Docker completely:
sudo apt-get remove docker docker-engine docker.io containerd runc
Then install the Docker Engine:
https://docs.docker.com/engine/install/
With the Docker extension installed, a workaround for me (on Mac) was:
(cmd-shift-p)
Go to "Preferences: Open Workspace Settings"
At the top of the settings, search for "docker path"
Enter the absolute path to the docker client executable (in my case "/usr/local/bin/docker")
Hope this helps someone.
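For reference, the same workaround can be written straight into the workspace's .vscode/settings.json; the docker.dockerPath key and the path shown are assumptions that may differ with your Docker extension version and install location:

```json
{
    // absolute path to the docker client executable (adjust for your machine)
    "docker.dockerPath": "/usr/local/bin/docker"
}
```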
If you installed VS Code with the Flatpak package manager (for example on Pop!_OS), it will not detect Docker.

Having Trouble Downloading Hyperledger fabric docker images

Background: I just downloaded docker, docker-compose, node.js, npm, and the Hyperledger samples from the official documentation. However, when I downloaded the Hyperledger sample networks, everything seemed to be going fine until the script tried pulling the Hyperledger Fabric docker images. This is the error message:
===> Pulling fabric Images
====> hyperledger/fabric-peer:2.1.0
Got permission denied while trying to connect to the Docker daemon socket at unix:///var
/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images
/create?fromImage=hyperledger%2Ffabric-peer&tag=2.1.0: dial unix /var/run/docker.sock:
connect: permission denied
NOTE: I am using ubuntu 18.04.4
I'm guessing: either the Docker service is not running, or your user does not have permission to access the Docker service (more likely).
Running your command as sudo is one way to fix it. Or have a look at this question: How can I use docker without sudo? (but be careful about the security trade-offs!)
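That diagnosis can be sketched as a small shell function (diagnose_sock is a hypothetical helper, not part of Docker):

```shell
# Hypothetical helper: report why `docker` commands might be failing,
# based on the state of the Docker daemon socket.
diagnose_sock() {
  sock="$1"
  if [ ! -S "$sock" ]; then
    echo "no socket: is the Docker daemon running?"
  elif [ -w "$sock" ]; then
    echo "socket writable: docker should work without sudo"
  else
    echo "permission denied: use sudo or add your user to the docker group"
  fi
}

diagnose_sock /var/run/docker.sock
```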
Add sudo to the command while you are pulling the Fabric binaries using curl:
sudo curl -sSL fabric-binaries-link | bash -s
A temporary solution would be to change the permissions of the docker.sock file.
Go to the terminal, type the following, and press enter:
sudo chmod 775 /var/run/docker.sock
However, it is not advised to use the root user for installing software for Fabric. Instead, you can do the following:
Create a new user:
sudo adduser bibek
Add the user to the sudo group:
sudo usermod -aG sudo bibek
Switch to the new user:
su - bibek
Then you can install docker and docker-compose:
sudo apt-get install docker.io docker-compose
Add the user to the docker group, then start and enable docker:
sudo usermod -a -G docker $USER
sudo systemctl start docker
sudo systemctl enable docker
You can check if the installation worked by running:
docker run hello-world
Cheers!

Unable to install docker on AWS Linux AMI

I followed the steps to install Docker on my EC2 instance, which is based on an Amazon AMI, using the instructions from the official link - official docker installation on centos. I am getting the below error.
$ sudo yum update
........
$ sudo yum install docker-ce docker-ce-cli containerd.io
........
--------> Finished Dependency Resolution
Error: Package: 3:docker-ce-19.03.8-3.el7.x86_64 (docker-ce-stable)
Requires: systemd
Error: Package: 3:docker-ce-19.03.8-3.el7.x86_64 (docker-ce-stable)
Requires: libsystemd.so.0(LIBSYSTEMD_209)(64bit)
Error: Package: 3:docker-ce-19.03.8-3.el7.x86_64 (docker-ce-stable)
Requires: container-selinux >= 2:2.74
Error: Package: containerd.io-1.2.13-3.1.el7.x86_64 (docker-ce-stable)
Requires: systemd
Error: Package: 3:docker-ce-19.03.8-3.el7.x86_64 (docker-ce-stable)
Requires: libsystemd.so.0()(64bit)
Error: Package: containerd.io-1.2.13-3.1.el7.x86_64 (docker-ce-stable)
Requires: container-selinux >= 2:2.74
Where am I going wrong?
Solution in the context of the image - Amazon Linux 2 AMI
You may need to remove the packages you installed using the Docker-provided links.
Use this command to remove the repo:
sudo rm /etc/yum.repos.d/docker-ce.repo
Then use the link given by AWS to install Docker: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html
The commands from that link are as follows:
Connect to your instance (Amazon Linux 2 AMI).
Update the installed packages and package cache on your instance.
sudo yum update -y
Install the most recent Docker Community Edition package.
sudo amazon-linux-extras install docker
Start the Docker service.
sudo service docker start
Add the ec2-user to the docker group so you can execute Docker commands without using sudo.
sudo usermod -a -G docker ec2-user
Log out and log back in again to pick up the new docker group permissions. You can accomplish this by closing your current SSH terminal window and reconnecting to your instance in a new one. Your new SSH session will have the appropriate docker group permissions.
Verify that the ec2-user can run Docker commands without sudo.
docker info
Use amazon-linux-extras to install docker
# install
sudo rm /etc/yum.repos.d/docker-ce.repo # if you have already tried in the wrong way
sudo amazon-linux-extras install docker
# enable on boot and start daemon
sudo systemctl enable docker
sudo systemctl start docker
# correct permissions
sudo usermod -a -G docker $USER
newgrp docker
docker ps
Amazon AMI best practice is to use their install procedures. You are, of course, at liberty to do what best fits your needs:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html
Figured it out because I had a similar issue an hour ago, and just realized I was doing it wrong:
https://serverfault.com/questions/836198/how-to-install-docker-on-aws-ec2-instance-with-ami-ce-ee-update
I installed Docker using the link below. Hope this helps others:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html

Installed docker and I got podman

I installed Docker on CentOS (running in VirtualBox) following the steps below:
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum install docker
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Then I rebooted my virtual machine, and when I type docker --version, I get:
"Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg. podman version 1.0.5"
Can anybody explain what is going on in my machine?
@swxraft if you run the commands in the order posted in your question
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum install docker
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
then you installed the docker from the RHEL repo (probably an alias to podman), and then added the repo for the official Docker but never installed from there.
Extra info:
A) Installing docker
How to install docker: follow this link [1] instead, @govinda-malavipathirana. The latest docker-ce needs a newer containerd.io, but RHEL excludes the ones in the Docker repo, so you need to install docker-ce with --nobest (see the instructions and the error in the link). You also need to disable firewalld to have DNS inside Docker.
B) Why docker is not in RHEL 8
The Docker CLI and daemon are not supported on RHEL 8 and its derivatives, and they are "blocked" in several ways. Why is it not supported -> monolithic and old [2]
Docker images ARE supported, via podman. Images created with docker work with podman and vice versa. Podman's commands are also the same as the docker client's.
Podman is a substitute for docker (but it does not use a daemon). They recommend adding a symlink docker -> podman, and you will not notice the difference [3]
[1]https://linuxconfig.org/how-to-install-docker-in-rhel-8
[2]http://crunchtools.com/why-no-docker/
[3]https://developers.redhat.com/blog/2019/02/21/podman-and-buildah-for-docker-users/
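The symlink suggestion from [3] can be sketched as follows; the run wrapper and DRY_RUN flag are my additions so the privileged commands can be previewed first, and /usr/bin/podman is an assumed location (check with command -v podman):

```shell
#!/bin/sh
# DRY_RUN=1 (the default here) only prints the commands;
# set DRY_RUN=0 to actually execute them.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# make `docker` invoke podman
run sudo ln -s /usr/bin/podman /usr/local/bin/docker
# silence the "Emulate Docker CLI using podman" banner
run sudo touch /etc/containers/nodocker
```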
Docker is not supported on RHEL 8; the docker package is installed as Podman.
So you can try it with Amazon Linux 2 instead of RedHat.
You have to create a new instance in AWS with Amazon Linux 2, then it will work.
Since docker is not officially supported on RHEL 8/CentOS 8, you have to install it with additional steps.
This is a good article I found on the internet that shows how to install docker on CentOS 8:
https://computingforgeeks.com/install-docker-and-docker-compose-on-rhel-8-centos-8

"gcloud auth configure-docker" on GCP VM instance with Ubuntu not setup properly?

I created a VM instance on GCP using Ubuntu 18.10. When I SSH into the VM without any modification and try:
gcloud info
I got some warnings:
System PATH: [/snap/google-cloud-sdk/66/usr/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin]
Python PATH: [/snap/google-cloud-sdk/66/lib/third_party:/snap/google-cloud-sdk/66/lib:/snap/google-cloud-sdk/66/usr/lib/python2.7/:/snap/google-cloud-sdk/66/usr/lib/python2.7/plat-x86_64-linux-gnu:/snap/google-cloud-sdk/66/usr/lib/python2.7/lib-tk:/snap/google-cloud-sdk/66/usr/lib/python2.7/lib-old:/snap/google-cloud-sdk/66/usr/lib/python2.7/lib-dynload]
Cloud SDK on PATH: [False]
Kubectl on PATH: [False]
WARNING: There are old versions of the Google Cloud Platform tools on your system PATH.
/usr/bin/snap
If I try to authenticate with:
sudo gcloud auth configure-docker
I see:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
WARNING: `docker` not in the system PATH.
`docker` and `docker-credential-gcloud` need to be in same PATH in order to work correctly together.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
The following settings will be added to your Docker config file
It seems that a quite recent version of gcloud is installed:
sudo gcloud version
Google Cloud SDK 230.0.0
alpha 2019.01.11
beta 2019.01.11
bq 2.0.39
core 2019.01.11
gsutil 4.35
kubectl 2019.01.11
It doesn't seem I am allowed to update gcloud on such an instance.
Then I installed Docker and pulled a docker image.
sudo snap install docker
sudo docker pull tensorflow/serving
This is working fine.
The issue is that I cannot push the image on GCP Container Registry:
sudo docker tag tensorflow/serving gcr.io/xxx/tf-serving
sudo docker push gcr.io/xxx/tf-serving
Unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in https://cloud.google.com/container-registry/docs/advanced-authentication
and in the link it is explained that I need to run:
sudo gcloud auth configure-docker
How do I fix the issue? The issue is already present when I first SSH into the VM:
WARNING: `docker-credential-gcloud` not in system PATH.
I can push the image on DockerHub without any issue.
I tried to reinstall google-cloud-sdk:
sudo apt-get update && sudo apt-get install google-cloud-sdk
But now I need to use:
sudo gcloud alpha auth configure-docker
and in the end I still cannot push the image.
It seems to be related to some path issue:
Cloud SDK on PATH: [False]
Kubectl on PATH: [False]
WARNING: There are old versions of the Google Cloud Platform tools on your system PATH.
/usr/bin/snap
Any idea? I did follow the GCP documentation step by step. I also looked at GCP IAM to grant some access on my bucket.
I am new to GCP and Cloud, so I am probably missing something obvious. By the way, I need to build a Docker image using a shell script, so I need this type of VM, because the other VMs, on which a lot of stuff is already pre-installed, are mounted with the "noexec" flag.
The Snap package contains docker-credential-gcloud in /snap/google-cloud-sdk/current/bin/. You can symlink it to /usr/local/bin using:
sudo ln -s /snap/google-cloud-sdk/current/bin/docker-credential-gcloud /usr/local/bin
After that, pushing Docker images to Google Container Registry (gcr.io) works fine.
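As a quick sanity check (check_on_path is a hypothetical helper), both binaries should now resolve on the PATH:

```shell
# Hypothetical helper: show where each named binary resolves, if at all.
check_on_path() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "$bin -> $(command -v "$bin")"
    else
      echo "$bin NOT on PATH"
    fi
  done
}

check_on_path docker docker-credential-gcloud
```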
I also tried sudo snap alias google-cloud-sdk.docker-credential-gcloud docker-credential-gcloud to create an alias similar to the one for gcloud itself. But that failed with the following error:
error: cannot perform the following tasks:
- Setup manual alias "docker-credential-gcloud" => "docker-credential-gcloud" for snap "google-cloud-sdk" (cannot enable alias "docker-credential-gcloud" for "google-cloud-sdk", target application "docker-credential-gcloud" does not exist)
Here is what is now working (thanks Google for the help)
Setup:
Choose Ubuntu 18.10 (GNU/Linux 4.18.0-1005-gcp x86_64)
add a 20 GB disk + allow HTTP and HTTPS
set access for each API -> Storage : Read Write
sudo snap remove google-cloud-sdk
curl https://sdk.cloud.google.com | bash
reconnect to the VM
install docker https://docs.docker.com/install/linux/docker-ce/ubuntu/
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo docker run hello-world # test
sudo usermod -a -G docker LOGIN
reconnect to the VM
gcloud auth configure-docker
testing docker pull/push on GCP
docker pull tensorflow/serving
docker tag tensorflow/serving gcr.io/BUCKET_NAME/tf-serving
docker push gcr.io/BUCKET_NAME/tf-serving
(if you don't give write access when creating the VM: use "gcloud auth login")
now this works
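For reference, gcloud auth configure-docker works by writing credential-helper entries into ~/.docker/config.json, roughly like the sketch below (your file may contain other keys, and the exact registry list depends on your SDK version):

```json
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud"
  }
}
```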
The issue might be the snap install; just remove /snap/google-cloud-sdk from the system, or check which gcloud to see which one is even used. The apt version does not seem to have these docker packages available - and also, the $PATH only lists that snap version.
Ordinarily, components can be updated with:
gcloud components update
or listed:
gcloud components list
or installed:
gcloud components install docker-credential-gcr
I would suggest simply installing the stand-alone version with:
curl https://sdk.cloud.google.com | bash
after having removed the snap and apt versions from the file-system and $PATH.