How to run a Linux KVM qcow2 image file in Docker

background
My learning objective is to set up an AWS S3 gateway server locally on my Raspberry Pi so that Kubernetes can connect to S3 via NFS. AWS has also provided some instructions on gateway server creation. (source: aws nfs csi, gateway creation).
problem
What I am unsure of is how to set up the gateway server in Kubernetes. So for starters, I'm trying to build a Docker image that can launch the Linux KVM qcow2 image that they have provided, but this is where I am failing.
what I've tried so far
dockerfile
FROM ubuntu:latest
COPY ./aws-storage-gateway-ID_NUMBER /
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y qemu qemu-kvm libvirt-clients libvirt-daemon-system virtinst bridge-utils
Within this Docker image, I tried to follow the instructions in gateway creation, but I'm met with this error from virsh:
root@ac48abdfc902:/# virsh version
Authorization not available. Check if polkit service is running or see debug message for more information.
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
True enough, /var/run/libvirt/libvirt-sock does not exist, but I am stuck and can't find any useful information to resolve this error and get virsh running.
Any thoughts and ideas would be appreciated.
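For context on the error above: that socket is created by the libvirtd daemon, and a plain container has no init system, so nothing ever starts it. A minimal sketch of starting the daemon by hand, assuming the container is run with extra privileges and the host's KVM device (image name gateway-image is a placeholder):

```shell
# Run the image with the host's KVM device and the privileges
# libvirt needs (gateway-image is a hypothetical image name)
docker run --privileged --device /dev/kvm -it gateway-image bash

# Inside the container: start libvirtd in the background,
# which creates /var/run/libvirt/libvirt-sock, then retry virsh
/usr/sbin/libvirtd -d
virsh version
```

This is a system-administration sketch, not something the question's author confirmed; whether nested KVM works at all still depends on the host kernel and Docker's security settings.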

Related

How do I grant the paketo builder access permissions to the docker socket when building from a docker image?

When using buildpacks to build my Spring Boot application on Fedora, I get the following error during the execution of the spring-boot-plugin:build-image goal:
[INFO] [creator] ERROR: initializing analyzer: getting previous image: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info": dial unix /var/run/docker.sock: connect: permission denied
After digging into the goal and buildpacks, I found the following command in the buildpack.io docs (by selecting "Linux" and "Container"):
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $PWD:/workspace -w /workspace \
buildpacksio/pack build <my-image> --builder <builder-image>
AFAICT this command should be equivalent to what happens inside of Maven, and it exhibits the same error.
My previous assumption was that the user in the buildpacksio/pack image doesn't have access permissions to my Docker socket. (The socket had 660 permissions and root:docker ownership.)
UPDATE: Even after changing to 666 permissions, the issue persists.
I don't really want to allow anyone to interact with the Docker socket, so setting it to 666 seems unwise. Is this the only option, or can I also add the user in the container to the docker group?
The solution was that the Fedora docker package is no longer the up-to-date way to install Docker; see the official Docker documentation.
Both packages report the same version number, but their build hashes differ.
While I could not fully diagnose the difference between the two, I can report that it works with docker-ce and doesn't with docker.
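The switch described above can be sketched as follows; these commands follow the official Docker install documentation for Fedora (package names as documented there):

```shell
# Remove the distribution's docker package
sudo dnf remove -y docker

# Add Docker's own repository and install docker-ce
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io

# Enable and start the daemon
sudo systemctl enable --now docker
```

After this, the socket at /var/run/docker.sock is owned by the docker-ce daemon, and buildpacks can talk to it as described.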

Getting Docker to run on a Raspberry Pi 3

When trying to build a Docker container on my Raspberry Pi 3, I encountered this error:
---> Running in adfb8905587b
failed to create endpoint amazing_hamilton on network bridge: failed to add the host (vetha45fbc5) <=> sandbox (veth7714d12) pair interfaces: operation not supported
I was able to find someone else with the same issue here; their solution was that we're missing the "Raspberry Pi Linux kernel extra modules" and should install them with these commands:
sudo apt update
sudo apt install linux-modules-extra-raspi
I've found that this command does not work for me and returns the following error:
E: Unable to locate package linux-modules-extra-raspi
How can I resolve this issue and get docker running on my Raspberry Pi 3?
A kernel update will do the job; simply run:
sudo rpi-update

Docker logs: npm self signed certificate error

I am trying to install Hyperledger Composer on macOS by following this tutorial.
When I run the following command from the tutorial
composer network start --networkName tutorial-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
I get the following error
Error: Error trying to start business network. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: REQUEST_TIMEOUT
When I check my Docker logs, I see the following errors.
I would like to know: is there a way I can pass an .npmrc file to Docker to solve this problem?
Can I set NODE_TLS_REJECT_UNAUTHORIZED=0 as an environment variable and give it to Docker?
Is there any workaround by which I can solve the problem?
Notes:
I have provided .npmrc in the following command
composer network install --card PeerAdmin@hlfv1 --archiveFile tutorial-network@0.0.1.bna -o npmrcFile=/Users/1/.npmrc
I have the following .npmrc file
always-auth=true
strict-ssl=false
ca=
loglevel=verbose
proxy= myproxy
https-proxy=myproxy
unsafe-perm=true
NODE_TLS_REJECT_UNAUTHORIZED=0
registry=http://registry.npmjs.org/
I am running all the applications behind corporate firewall as well as in Mac OS
You can pass an npmrc file as part of the composer network install command. When Fabric builds the chaincode image for the business network, it will use that npmrc file for the npm install it performs; see
https://hyperledger.github.io/composer/latest/managing/connector-information
for more information about the CLI options.

Unable to connect to docker hub from China

I'm getting the same thing every time I try to run busybox, either with Docker on Fedora 20 or with boot2docker in VirtualBox:
[me#localhost ~]$ docker run -it busybox Unable to find image
'busybox:latest' locally Pulling repository busybox FATA[0105] Get
https://index.docker.io/v1/repositories/library/busybox/images: read
tcp 162.242.195.84:443: i/o timeout
I can open https://index.docker.io/v1/repositories/library/busybox/images in a browser, sometimes even without a VPN tunnel, so I tried to set a proxy in the network settings to the proxy provided by Astrill (when using VPN sharing), but it always times out.
I am currently in China, where there basically is no Internet due to the firewall. npm, git and wget seem to use the Astrill proxy in the terminal (when it is set in the network settings of Fedora 20), but somehow I either can't get the Docker daemon to use it or something else is wrong.
It seems the answer was not so complicated, according to the following documentation (I had read it before but thought that setting the proxy in the network settings UI would take care of it).
So I added the following to /etc/systemd/system/docker.service.d/http-proxy.conf (after creating the docker.service.d directory and the conf file):
[Service]
Environment="HTTP_PROXY=http://localhost:3213/"
Environment="HTTPS_PROXY=http://localhost:3213/"
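For the drop-in above to take effect, systemd has to reload its unit files and Docker has to be restarted; these are the standard systemd commands for that (not spelled out in the original answer):

```shell
# Reload unit files so systemd picks up the new drop-in,
# then restart the Docker daemon with the proxy environment set
sudo systemctl daemon-reload
sudo systemctl restart docker

# Verify the environment was applied to the docker unit
sudo systemctl show --property=Environment docker
```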
In the Astrill app (I'm sure other provider application provide something similar) there is an option for vpn sharing which will create a proxy; it can be found under settings => vpn sharing.
For git, npm and wget, setting the proxy in the UI (gnome-control-center => Network => network proxy) is enough, but when using sudo it's better to do a sudo su, set the env, and then run the command needing a proxy, for example:
sudo su
export http_proxy=http://localhost:3213/
export ftp_proxy=http://localhost:3213/
export all_proxy=socks://localhost:3213/
export https_proxy=http://localhost:3213/
export no_proxy=localhost,127.0.0.0/8,::1
export NO_PROXY="/var/run/docker.sock"
npm install -g ...
I'd like to update the solution for people who still encounter this issue today.
I don't know the details, but when using the WireGuard protocol on Astrill, docker build and docker run will use the VPN. If for some reason it doesn't work, try restarting the Docker service with sudo service docker restart while the VPN is active.
Hope it helps; I just wasted an hour trying to figure out why it stopped working.

Docker service does not start

Docker is currently giving me a hard time. I followed these instructions to install Docker on my virtual server running Ubuntu 14.04, hosted by strato.de.
wget -qO- https://get.docker.com/ | sh
Executing this line runs me directly into this error message:
modprobe: ERROR: ../libkmod/libkmod.c:507 kmod_lookup_alias_from_builtin_file() could not open builtin file '/lib/modules/3.13.0-042stab092.3/modules.builtin.bin'modprobe: FATAL: Module aufs not found.
Warning: current kernel is not supported by the linux-image-extra-virtual
package. We have no AUFS support. Consider installing the packages linux-image-virtual kernel and linux-image-extra-virtual for AUFS support.
After the installation was done, I installed the two mentioned packages. Now my problem is that I can't get docker to run.
service docker start
results in:
start: Job failed to start
docker -d
results in
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
ERRO[0000] 'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.
INFO[0000] +job init_networkdriver()
WARN[0000] Running modprobe bridge nf_nat failed with message: , error: exit status 1
package not installed
INFO[0000] -job init_networkdriver() = ERR (1)
FATA[0000] Shutting down daemon due to errors: package not installed
and
docker run hello-world
results in
FATA[0000] Post http:///var/run/docker.sock/v1.18/containers/create: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
Does anybody have a clue about which dependencies could be missing? What else could have gone wrong? Are there any logs which Docker provides?
I've been searching back and forth for a solution but couldn't find one.
Just to mention, this is a fresh Ubuntu 14.04 setup. I didn't install any other services except for Java, and the reason I need Docker is to use the Docker image of ShareLaTeX.
I'm thankful for any help!
Here's what I tried/found out, hoping that it will save you some time or even help you solve it.
Docker's download script is trying to identify the kernel through uname -r to be able to install the right kernel extras for your host.
I suspect two problems:
My provider (united-hoster.de), and probably yours, uses customized kernel images (e.g. 3.13.0-042stab108.2) for virtual hosts. Since the script explicitly looks for -generic in the name, the lookup fails.
While the naming problem would be easy to fix, I wasn't able to install the generic kernel extras with my hoster's custom kernel. Upgrading the kernel does not seem to work either, since it would affect all users/vHosts on the same physical machine; the kernel is shared (as stated in a support ticket).
To get around that ..
I skipped it, hoping that Docker would work without AUFS support, but it didn't.
I tried to force Docker to use devicemapper instead, but to no avail.
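For reference, forcing a storage driver on the standalone daemon from this Docker 1.x era looked like the sketch below (the --storage-driver flag is documented; on modern installs the daemon binary is dockerd):

```shell
# Docker 1.x: start the daemon by hand with an explicit storage driver
docker -d --storage-driver=devicemapper

# Modern equivalent
dockerd --storage-driver=devicemapper
```

On this hoster's custom kernel, even devicemapper failed, which is why the answer falls back to the options below.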
I see two options: get a dedicated host so you can mess with kernels and filesystems (or at least let the Docker installer do it), or install the binaries manually.
You need to start Docker:
sudo start docker
and then
sudo docker run hello-world
I faced the same problem on Ubuntu 14.04 and solved it; refer to this comment by Nino-K: https://github.com/docker-library/hello-world/issues/3