Docker - New container keeps using old user data

I am using Docker to run the Home Assistant container; the host machine is Ubuntu.
After running the container I uploaded a snapshot from my RPi to restore the data, and it worked fine.
Now the problem is that I want a fresh install of HA, but every time I run the container (a new run) I still get the old user data from the initial container (the snapshot I uploaded).
I tried deleting the images, containers, volumes, and even the docker and container folders under /var/lib, and reinstalled Docker, but without any luck.
Here are the commands I used to install the container:
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo apt install jq
sudo su
sudo curl -sL https://raw.githubusercontent.com/home-assistant/supervised-installer/master/installer.sh | bash -s
docker container ls -a
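The supervised installer keeps Home Assistant's data in a bind-mounted directory on the host rather than in a Docker-managed volume, which would explain why pruning images, containers, and volumes does not reset it. A hedged cleanup sketch, assuming the installer's usual /usr/share/hassio data path (an assumption; verify the path on your system before deleting anything):
# Stop and remove the Home Assistant / Supervisor containers
sudo docker ps -a --filter "name=hassio" -q | xargs -r sudo docker rm -f
# Remove all unused images, networks, and volumes managed by Docker
sudo docker system prune -a --volumes
# The supervised installer bind-mounts its data from the host; that directory
# has to be removed separately (path is an assumption -- check it first)
ls /usr/share/hassio
# sudo rm -rf /usr/share/hassio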

Related

Is it possible to nest docker/podman containers

I run Fedora 35 and need to run an app in Docker on Ubuntu.
I was able to pull and run Ubuntu via podman:
podman pull ubuntu:20.04
and set up Docker there, but I can't make it run, probably because I didn't start the podman container properly. I used:
podman run -it ubuntu:20.04
where I ran:
su -
apt update; apt upgrade
apt install inetutils-ping nano sudo npm
apt install apt-transport-https ca-certificates curl gnupg
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) stable" |sudo tee /etc/apt/sources.list.d/docker.list > /dev/null apt update
apt install docker-ce docker-ce-cli containerd.io
Starting Docker via systemctl is not possible in the container, and the dockerd command gives many errors, mostly that it can't access overlay, and probably the network (iptables):
ERRO[2022-05-07T23:14:18.803335993+02:00] failed to mount overlay: operation not permitted storage-driver=overlay2
ERRO[2022-05-07T23:14:18.803397023+02:00] exec: "fuse-overlayfs": executable file not found in $PATH storage-driver=fuse-overlayfs
ERRO[2022-05-07T23:14:18.803500924+02:00] AUFS was not found in /proc/filesystems storage-driver=aufs
ERRO[2022-05-07T23:14:18.803887884+02:00] failed to mount overlay: operation not permitted storage-driver=overlay
Is it possible at all to run an app with a service that exposes a port outside of Docker and podman, given that there are two layers of nested containers?
It is not possible to use the default overlay storage driver inside another container; you need to change the storage driver to vfs. Maybe https://docs.docker.com/storage/storagedriver/vfs-driver/ helps.
Disclaimer: this definitely works when running podman inside Docker; I have not tested the other way around.
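For completeness, a minimal sketch of switching the inner Docker daemon to the vfs storage driver via /etc/docker/daemon.json (run inside the container before starting dockerd; vfs is slow and uses a lot of disk, but it needs no overlay support):
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "vfs"
}
EOF
dockerd &
The outer podman container will typically also need to be started with --privileged for a nested daemon to work at all.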

Installing Docker in Ubuntu on Windows 10 : Failed to Setup IP tables: Unable to enable NAT rule

I am trying to install Docker in Ubuntu on Windows 10 using the script below, but when I try to run Docker as a service with service docker start, Docker does not start and I find the following error in docker.log. I used the same installation instructions on a plain Ubuntu machine and had no problem running Docker.
failed to start daemon: Error initializing network controller: Error creating default "bridge" network: Failed to Setup IP tables: Unable to enable NAT rule: (iptables failed: iptables --wait -t nat -I POSTROUTING -s 172.18.0.0/16 ! -o docker0 -j MASQUERADE: iptables: Invalid argument. Run `dmesg' for more information.
(exit status 1))
Installation script
# Update the apt package list.
sudo apt-get update -y
# Install Docker's package dependencies.
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
# Download and add Docker's official public PGP key.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Verify the fingerprint.
sudo apt-key fingerprint 0EBFCD88
# Add the `stable` channel's Docker upstream repository.
#
# If you want to live on the edge, you can change "stable" below to "test" or
# "nightly". I highly recommend sticking with stable!
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
# Update the apt package list (for the new apt repo).
sudo apt-get update -y
# Install the latest version of Docker CE.
sudo apt-get install -y docker-ce
# Allow your user to access the Docker CLI without needing root access.
sudo usermod -aG docker $USER
I encountered the same problem and here is what I found out.
It currently isn't possible to run Docker in WSL. The workaround is:
Update the apt package with:
sudo apt-get update
Install packages to allow apt to use a repository over HTTPS with:
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
Add docker's GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Set up a stable repository with:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Update the apt package again:
sudo apt-get update
Install Docker CE:
sudo apt-get install docker-ce
Then add this line, which tells the Docker CLI which host daemon to communicate with:
echo "export DOCKER_HOST=localhost:2375" >> ~/.bash_profile
Restart your VS Code.
Install Docker Desktop, go to its settings, and check "Expose daemon on tcp://localhost:2375 without TLS".
With this, I was able to run Docker in WSL (Ubuntu). Hope it helps.
credit: https://medium.com/@sebagomez/installing-the-docker-client-on-ubuntus-windows-subsystem-for-linux-612b392a44c4
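A quick way to confirm that the CLI inside WSL is actually talking to the Docker Desktop daemon on Windows (a sketch, assuming the TCP option above is enabled):
export DOCKER_HOST=tcp://localhost:2375
docker version            # should now show both Client and Server sections
docker run --rm hello-world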
Running Docker in WSL is not currently possible. You will have to install Docker Desktop on Windows. Then you can install the Docker CLI in WSL and use docker from there.
If you have enabled the WSL 2 preview feature, you can install Docker Desktop in WSL 2 mode, which will give much better performance.
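To check which mode a distribution is running under, you can run the following from Windows PowerShell or cmd (not from inside the WSL shell):
wsl -l -v    # lists installed distributions and their WSL version (1 or 2)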

Cannot install docker machine in ubuntu running in virtual box

I am trying to set up the whole Docker ecosystem in Ubuntu Linux running in VirtualBox. I succeeded in installing Docker Engine, but I cannot install Docker Compose and Docker Machine. Below are the steps I followed to install Docker Machine.
$ base=https://github.com/docker/machine/releases/download/v0.14.0 &&
curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker- machine &&
sudo install /tmp/docker-machine /usr/local/bin/docker-machine
I am getting the error below
/usr/local/bin/docker-machine: line 1: Not: command not found
while running the command docker-machine --version.
First, uninstall older versions:
sudo apt-get remove docker docker-engine docker.io
Update the apt package index:
$ sudo apt-get update
Add Docker’s official GPG key:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Verify that you now have the key with the fingerprint:
sudo apt-key fingerprint 0EBFCD88
pub 4096R/0EBFCD88 2017-02-22
Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid Docker Release (CE deb) <docker@docker.com>
sub 4096R/F273FCD8 2017-02-22
Set up the stable repository. You always need the stable repository. The next four lines are one single command; copy and paste them together.
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
Update the apt package index one more time.
sudo apt-get update
Install the latest version of Docker CE
sudo apt-get install docker-ce
test if it is installed
docker --version
INSTALL DOCKER COMPOSE
Run this command to download the current stable release of Docker Compose:
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Apply executable permissions to the binary:
sudo chmod +x /usr/local/bin/docker-compose
test it
docker-compose --version
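As for the original docker-machine error: "Not: command not found" on line 1 usually means the downloaded file is a "Not Found" error page rather than the binary, and the stray space in /tmp/docker- machine would also send the download to the wrong filename. A sketch of repeating the download with the space removed and with -f so curl fails on HTTP errors instead of saving them:
base=https://github.com/docker/machine/releases/download/v0.14.0
curl -fL "$base/docker-machine-$(uname -s)-$(uname -m)" -o /tmp/docker-machine
sudo install /tmp/docker-machine /usr/local/bin/docker-machine
docker-machine --version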

Building and pushing a docker image from inside a container

Context: I am using repo2docker to build images containing experiments, then to push them to a private registry.
I am dockerizing this whole pipeline (cloning the code of the experiment, building the image, pushing it) with docker-compose.
This is what I tried:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y python3-pip python3-dev git apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
RUN apt-get update && apt-get install docker-ce --yes
RUN service docker start
# more setup
ENTRYPOINT rqworker -c settings image_build_queue
Then I pass the jobs to the rqworker (the rqworker part works well).
But Docker doesn't start in my container. Therefore I can't log in to the registry and can't build the image.
(Note that I need docker to run, but I don't need to run containers.)
The solution was to share the host's Docker socket, so the build actually happens on the host.
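A minimal sketch of that socket sharing in docker-compose form (the service name is hypothetical). With the host's /var/run/docker.sock mounted, the docker CLI inside the container drives the host daemon, so the RUN service docker start line and the full docker-ce engine are no longer needed, only docker-ce-cli:
cat > docker-compose.yml <<'EOF'
services:
  builder:                # hypothetical service name
    build: .
    volumes:
      # share the host daemon's socket so builds and pushes happen on the host
      - /var/run/docker.sock:/var/run/docker.sock
EOF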

How to install docker on linode

I have a KVM Linode with Ubuntu 16.04.
I am trying to install Docker, and the following command fails:
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
with error:
E: Unable to locate package linux-image-extra-4.8.6-x86_64-linode78
E: Couldn't find any package by glob 'linux-image-extra-4.8.6-x86_64-linode78'
E: Couldn't find any package by regex 'linux-image-extra-4.8.6-x86_64-linode78'
Any idea how to fix it and finish the installation?
I have also tried the official Linode documentation, but after executing curl -sSL https://get.docker.com/ | sh all activity stops after the message Setting up docker-engine (1.12.5-0~ubuntu-xenial) ...
no more errors, no more messages.
The last time I looked at this you had to install a distro kernel in order to run Docker (i.e. you can't use the Linode kernels) due to the AUFS requirement. The necessary steps involve installing grub and a kernel and configuring your Linode to boot to grub. More information available here:
https://www.linode.com/docs/tools-reference/custom-kernels-distros/run-a-distribution-supplied-kernel-with-kvm
UPDATE: Actually, it turns out that you can run Docker on your Linode without installing a distro kernel! You just have to use OverlayFS instead of AUFS. This will become the default behavior in Docker 1.13. Here are the instructions:
Set up device-mapper so the initial Docker install doesn’t hang:
sudo apt-get update
sudo apt-get install dmsetup
sudo dmsetup mknodes
Follow the instructions here to install Docker, which as of the time of this writing are as follows:
sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
source /etc/lsb-release
echo "deb https://apt.dockerproject.org/repo ubuntu-$DISTRIB_CODENAME main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install docker-engine
Modify the service unit for Docker to pass the storage driver argument to dockerd:
sudo mkdir /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/override.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -s overlay
EOF
Reload systemd so it sees the new override.conf, and restart the daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker
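A quick sanity check that the daemon actually came back up with the overlay driver:
docker info | grep -i 'storage driver'    # should report: Storage Driver: overlay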
Here's an updated #2 for docker-ce, which replaces docker-engine as of March 2017:
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" |
sudo tee /etc/apt/sources.list.d/docker.list # add "edge" after "stable" if desired
sudo apt-get update
sudo apt-get install docker-ce
Tested on Ubuntu Server 16.04 LTS and Docker 1.12, 1.13, and 17.03. Performance has been good and I'm actually running it in production. For more information:
http://blog.thestateofme.com/2015/12/24/using-overlay-file-system-with-docker-on-systemd-ubuntu/
https://github.com/docker/docker/issues/23347
https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/
@mvp's answer helped me get past the installation.
Here is the history of all commands from Linode creation to Docker installation:
1 uname -a
2 apt-get install linux-image-virtual grub2
3 apt-get update
4 apt-get install linux-image-virtual grub2
5 vi /etc/default/grub
6 update-grub
7 uname -a
8 apt-get update && apt-get upgrade
9 curl -sSL https://get.docker.com/ | sh
10 history
I have put this here for reference for those who eventually find themselves in the same situation.
