runcmd in LXC profile ignored - lxc

I want to run a command for each container that is created using my profile. The profile looks like this:
config:
  user.user-data: |
    #cloud-config
    package_upgrade: true
    packages:
      - python3
    runcmd:
      - "touch /tmp/test.txt"
description: Description
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: enp4s0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: my-profile
The /tmp/test.txt file is just not there when I log into my running container. And that is an issue. The worst part is that there are no error messages or any other logs that could help.
According to their documentation, runcmd does exist as an option in "cloud-init".

Which image are you using?
https://linuxcontainers.org/lxd/docs/master/cloud-init/#cloud-init
Before trying to use it, however, first determine which image source you are about to use as not all images have the cloud-init package installed.
The images from the ubuntu and ubuntu-daily remotes are all cloud-init enabled. Images from the images remote have cloud-init enabled variants using the /cloud suffix, e.g. images:ubuntu/22.04/cloud.
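As a quick check, a minimal sketch assuming a cloud-enabled image from the images: remote and the profile above (the container name c1 is arbitrary):

lxc launch images:ubuntu/22.04/cloud c1 --profile my-profile
# wait until cloud-init has finished, then look for the file
lxc exec c1 -- cloud-init status --wait
lxc exec c1 -- ls -l /tmp/test.txt

If the image is not cloud-enabled, the cloud-init command simply won't exist inside the container, which matches the "no errors, no logs" symptom.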

Related

ansible script to install docker repo on centos server with no internet access

I am looking for an Ansible script to install the Docker packages on a CentOS 8 server with no internet access.
I have tried the playbook below on my test server (which has internet access), but the actual server doesn't have access to the internet, so I am looking for options.
---
- hosts: localhost
  become: true
  tasks:
    - name: Install yum utils
      yum:
        name: yum-utils
        state: latest
    - name: Add Docker repo
      get_url:
        url: https://download.docker.com/linux/centos/docker-ce.repo
        dest: /etc/yum.repos.d/docer-ce.repo
      become: yes
If only the target server isn't connected to the internet, you can fetch the file on the controller and push it to the target:
- name: Get Docker repo definition locally
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/centos/docker-ce.repo
    dest: /tmp/docer-ce.repo
  changed_when: false
  delegate_to: localhost
  run_once: true

- name: Copy the repo file to target(s)
  ansible.builtin.copy:
    src: /tmp/docer-ce.repo
    dest: /etc/yum.repos.d/docer-ce.repo
  become: yes
In the above:
Fetching the reference file from the URL onto the controller is treated as a non-event as far as idempotence is concerned, hence the changed_when: false.
The copy task will report a change if the file has to be created on the target, or if the file on the target differs from the fetched reference.
run_once: true ensures the file is fetched only once, however many target hosts are in your play. The copy task will run for each target and push the file with the same content.
If both the controller and the target are not connected to the internet, you will have to get the repo file onto the controller some other way before copying (and maintain it over time).
You could provide the RPM from the ansible controller as part of your playbook then install it using the yum module.
- name: Install nginx rpm from a local file
  ansible.builtin.yum:
    name: /usr/local/src/nginx-release-centos-6-0.el6.ngx.noarch.rpm
    state: present
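Combining that with the copy approach above, a hedged sketch (the RPM file name and paths are placeholders for whatever you download onto the controller beforehand, and an offline install may also need the dependency RPMs available locally):

- name: Copy the RPM from the controller to the target
  ansible.builtin.copy:
    src: files/docker-ce.rpm           # placeholder path on the controller
    dest: /usr/local/src/docker-ce.rpm
  become: yes

- name: Install the RPM from the local file on the target
  ansible.builtin.yum:
    name: /usr/local/src/docker-ce.rpm
    state: present
  become: yes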

How to grant a non-root user write permissions on a Kubernetes SMB flexvolume mount?

Please help! I'm struggling with this for a few days now...
I'm trying to write to a mount in a Kubernetes pod with a non-root user and getting access denied.
In the Kubernetes manifest, I am mounting a windows shared folder like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: centos-deployment
spec:
  template:
    spec:
      volumes:
        - name: windows-mount
          flexVolume:
            driver: microsoft.com/smb
            secretRef:
              name: centos-credentials
            options:
              mountOptions: 'cifs,dir_mode=0777,file_mode=0777'
              source: //100.200.300.400/windows-share
      containers:
        - name: centos-pod
          image: 'centos:latest'
          command:
            - sh
            - '-c'
            - sleep 1000000
          volumeMounts:
            - name: windows-mount
              mountPath: /var/windows-share
and in the Dockerfile I'm switching to the application user like so:
# Drop from 'root' user to 'nobody' (user with no privileges).
USER nobody:nobody
But now, the mount is owned by "root". The "root" user can write to the path but the user "nobody" cannot.
I tried an init container running chmod -R 775 on the folder, but it looks like even the root user cannot change the permissions or ownership of the mount. (The umask command returned 022.)
If I exec into the pod, I can see the mount is set with 755 permissions instead of 777
"file_mode=0755,dir_mode=0755"
[root@centos-deployment-5d46bd8b89-tzghs /]# mount | grep windows-share
//100.200.300.400/windows-share on /var/windows-share type cifs (rw,relatime,vers=default,cache=strict,username=*******,domain=,uid=0,noforceuid,gid=0,noforcegid,addr=100.200.300.400,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
Any idea how to mount a Windows share so that it is writable by non-root user?
Thanks! Any help will be much appreciated.
Full reference: https://linux.die.net/man/8/mount.cifs
Try playing with the mountOptions. For example:
uid=arg - sets the uid that will own all files or directories on the mounted filesystem when the server does not provide ownership information. It may be specified as either a username or a numeric uid. When not specified, the default is uid 0. The mount.cifs helper must be at version 1.10 or higher to support specifying the uid in non-numeric form. See the section on FILE AND DIRECTORY OWNERSHIP AND PERMISSIONS below for more information.
volumes:
  - name: windows-mount
    ...
    options:
      mountOptions: 'cifs,uid=<YOUR_USERNAME>,dir_mode=0777,file_mode=0777'
If this doesn't work, you can also try adding the noperm mount option.
noperm - Client does not do permission checks. This can expose files on this mount to access by other users on the local client system. It is typically only needed when the server supports the CIFS Unix Extensions but the UIDs/GIDs on the client and server system do not match closely enough to allow access by the user doing the mount. Note that this does not affect the normal ACL check on the target machine done by the server software (of the server ACL against the user name provided at mount time).
volumes:
  - name: windows-mount
    ...
    options:
      mountOptions: 'cifs,dir_mode=0777,file_mode=0777,noperm'
About using chmod/chown on CIFS
The core CIFS protocol does not provide unix ownership information or mode for files and directories. Because of this, files and directories will generally appear to be owned by whatever values the uid= or gid= options are set, and will have permissions set to the default file_mode and dir_mode for the mount. Attempting to change these values via chmod/chown will return success but have no effect.
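Putting both hints together, a sketch of the volume definition (the uid/gid value 65534 is an assumption for the nobody user; adjust it to whatever user your image actually runs as):

volumes:
  - name: windows-mount
    flexVolume:
      driver: microsoft.com/smb
      secretRef:
        name: centos-credentials
      options:
        # 65534 is the conventional uid/gid of 'nobody'; change to match your container user
        mountOptions: 'cifs,uid=65534,gid=65534,dir_mode=0777,file_mode=0777,noperm'
        source: //100.200.300.400/windows-share

Since chmod/chown have no effect on a CIFS mount, setting the ownership at mount time is the practical way to make it writable for a non-root user.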

Traefik: no space left on device

I'm trying to enable the file provider for registering dynamic configuration, but I get this error:
Cannot start the provider *file.Provider: error adding file watcher: no space left on device
Traefik uses fsnotify to add new watchers, and its limit comes from the Linux setting /proc/sys/fs/inotify/max_user_watches.
I tried to change the variable inside the Docker container with sudo:
sudo sysctl -w fs.inotify.max_user_watches=12288
but I'm getting an error:
sysctl: error setting key 'fs.inotify.max_user_watches': Read-only file system
Traefik configuration:
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
providers:
  docker: {}
  file:
    directory: '/config'
    watch: true
api:
  dashboard: true
certificatesResolvers:
  le:
    acme:
      email: myemail@mail.com
      storage: acme.json
      httpChallenge:
        entryPoint: web
Traefik version: 2.2.1
When I run Traefik on another machine, or on my Mac, or when I set watch to false, it works like a charm, but I need to watch for file changes.
Please tell me how I can change this variable with sudo in an Alpine container, or how to solve this issue another way.
Well, trying to change max_user_watches inside the Docker container was a wrong idea. I needed to change max_user_watches on the Linux machine where I run the Docker containers.
After running the command:
sudo sysctl -w fs.inotify.max_user_watches=12288
it worked fine.
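To make the setting survive a reboot of the host, a small sketch (the file name under /etc/sysctl.d/ is an arbitrary choice; run this on the Docker host, not inside the container):

echo 'fs.inotify.max_user_watches=12288' | sudo tee /etc/sysctl.d/99-inotify.conf
sudo sysctl --system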

Openshift: any deployment resulted in Application is not available

First time deploying to OpenShift (actually Minishift on my Windows 10 Pro). Every sample application I deployed successfully resulted in "Application is not available".
From the Web Console I see a weird message, "Build #1 is pending", although from PowerShell I saw it completed successfully.
I found someone fixing a similar issue by changing to 0.0.0.0, but I gave it a try and it isn't the solution in my case.
Here are the full logs and how I am deploying:
PS C:\to_learn\docker-compose-to-minishift\first-try> oc new-app https://github.com/openshift/nodejs-ex
warning: Cannot check if git requires authentication.
--> Found image 93de123 (16 months old) in image stream "openshift/nodejs" under tag "10" for "nodejs"
Node.js 10.12.0
---------------
Node.js available as docker container is a base platform for building and running various Node.js applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
Tags: builder, nodejs, nodejs-10.12.0
* The source repository appears to match: nodejs
* A source build using source code from https://github.com/openshift/nodejs-ex will be created
* The resulting image will be pushed to image stream tag "nodejs-ex:latest"
* Use 'start-build' to trigger a new build
* WARNING: this source repository may require credentials.
Create a secret with your git credentials and use 'set build-secret' to assign it to the build config.
* This image will be deployed in deployment config "nodejs-ex"
* Port 8080/tcp will be load balanced by service "nodejs-ex"
* Other containers can access this service through the hostname "nodejs-ex"
--> Creating resources ...
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "nodejs-ex" created
deploymentconfig.apps.openshift.io "nodejs-ex" created
service "nodejs-ex" created
--> Success
Build scheduled, use 'oc logs -f bc/nodejs-ex' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/nodejs-ex'
Run 'oc status' to view your app.
PS C:\to_learn\docker-compose-to-minishift\first-try> oc get bc/nodejs-ex -o yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: 2020-02-20T20:10:38Z
  labels:
    app: nodejs-ex
  name: nodejs-ex
  namespace: samplepipeline
  resourceVersion: "1123211"
  selfLink: /apis/build.openshift.io/v1/namespaces/samplepipeline/buildconfigs/nodejs-ex
  uid: 1003675e-541d-11ea-9577-080027aefe4e
spec:
  failedBuildsHistoryLimit: 5
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: nodejs-ex:latest
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    git:
      uri: https://github.com/openshift/nodejs-ex
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:10
        namespace: openshift
    type: Source
  successfulBuildsHistoryLimit: 5
  triggers:
    - github:
        secret: c3FoC0RRfTy_76WEOTNg
      type: GitHub
    - generic:
        secret: vlKqJQ3ZBxfP4HWce_Oz
      type: Generic
    - type: ConfigChange
    - imageChange:
        lastTriggeredImageID: 172.30.1.1:5000/openshift/nodejs@sha256:3cc041334eef8d5853078a0190e46a2998a70ad98320db512968f1de0561705e
      type: ImageChange
status:
  lastVersion: 1
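For what it's worth, the oc new-app output above already points at the missing step: the service is never exposed, so the router answers with "Application is not available". A hedged sketch of the follow-up commands it suggests (the route hostname is whatever Minishift assigns):

oc logs -f bc/nodejs-ex     # follow the build until the image is pushed
oc status                   # confirm the deployment rolled out
oc expose svc/nodejs-ex     # create a route for the service
oc get route nodejs-ex      # the HOST/PORT column is the URL to browse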

Docker Desktop for Windows: No hypervisor is present on this system

I am new to Docker Desktop for Windows. I am getting an error when I tried the hello-world example by following this.
Update
Below are the steps I followed:
1 Installed Docker for Windows, stable version
2 Both Hyper-V and Virtualization have been enabled on my Windows 10
However, I get the error below when switching to a Linux container:
An error occurred.
Hardware assisted virtualization and data execution protection must be enabled in the BIOS. See https://docs.docker.com/docker-for-windows/troubleshoot/#virtualization-must-be-enabled
Please note the problem in this post occurs when using Windows containers. Step 3 is using Windows containers, not Linux.
3 Error below when trying out hello world
PS C:\Users\'#.lp> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
407ada6e90de: Pull complete
9c9e16cbf19f: Pull complete
2cb715c55064: Pull complete
990867d1296d: Pull complete
Digest: sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
Status: Downloaded newer image for hello-world:latest
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: container e646da0e13b5c2ba92db3ade35f6a334f9c2903efde26a78765f55f0498a86f1 encountered an error during CreateContainer: failure in a Windows system call: No hypervisor is present on this system. (0xc0351000) extra info: {"SystemType":"Container","Name":"e646da0e13b5c2ba92db3ade35f6a334f9c2903efde26a78765f55f0498a86f1","Owner":"docker","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\Docker\\windowsfilter\\e646da0e13b5c2ba92db3ade35f6a334f9c2903efde26a78765f55f0498a86f1","Layers":[{"ID":"84cbd4e4-1a6a-5e55-86fa-927ba5be73e0","Path":"C:\\ProgramData\\Docker\\windowsfilter\\417caa6a366bad6fe0d68d2b459510e4c50fda5686b37fb91c9363ca103e9475"},{"ID":"e747017d-859e-5513-b9ad-346002efc167","Path":"C:\\ProgramData\\Docker\\windowsfilter\\43e4d5eeaebc150ea9da0bf919302a2d7646461e3da60b5cbd3db15d3d928698"},{"ID":"e0bd7f8a-622c-589f-9752-eb7b80b88973","Path":"C:\\ProgramData\\Docker\\windowsfilter\\e8ee5f9ec8d67bfebe230b67989dd788506e33627a4400bb63ba098b2a3fd733"},{"ID":"6f13d213-2d8c-5c37-b1f5-770f73ad2d9a","Path":"C:\\ProgramData\\Docker\\windowsfilter\\a731844c4d933200e984524b7273ac3a555792bafec6eab30722fdfd7992ee96"}],"HostName":"e646da0e13b5","HvPartition":true,"EndpointList":["0b88e638-56ea-4157-88a7-67fc3bc35958"],"HvRuntime":{"ImagePath":"C:\\ProgramData\\Docker\\windowsfilter\\e8ee5f9ec8d67bfebe230b67989dd788506e33627a4400bb63ba098b2a3fd733\\UtilityVM"},"AllowUnqualifiedDNSQuery":true}.
System information below:
PS C:\Users\'#.lp> docker --version
Docker version 17.09.1-ce, build 19e2cf6
PS C:\Users\'#.lp> docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.09.1-ce
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: ics l2bridge l2tunnel nat null overlay transparent
Log: awslogs etwlogs fluentd json-file logentries splunk syslog
Swarm: inactive
Default Isolation: hyperv
Kernel Version: 10.0 16299 (16299.15.amd64fre.rs3_release.170928-1534)
Operating System: Windows 10 Pro
OSType: windows
Architecture: x86_64
CPUs: 4
Total Memory: 7.999GiB
Name: username
ID: 5EK5:6LMU:NPZG:3K2F:W3X7:2G7T:GFYU:GENE:LDBA:UASU:ZF26:T3AU
Docker Root Dir: C:\ProgramData\Docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: -1
Goroutines: 24
System Time: 2017-12-24T20:16:32.0728521Z
EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
PS C:\Users\'#.lp> docker-compose --version
docker-compose version 1.17.1, build 6d101fb0
PS C:\Users\'#.lp> docker-machine --version
docker-machine.exe version 0.13.0, build 9ba6da9
Windows 10 Pro version 1709
Any idea?
Update:
PS C:\WINDOWS\system32> docker --version
Docker version 17.12.0-ce, build c97c6d6
PS C:\WINDOWS\system32> docker rm -f $(docker ps -a -q)
a7094c166be7
afbc956d0630
6cc2e3a20dcc
e646da0e13b5
PS C:\WINDOWS\system32> docker rmi -f $(docker images -q)
Untagged: hello-world:latest
Untagged: hello-world@sha256:445b2fe9afea8b4aa0b2f27fe49dd6ad130dfe7a8fd0832be5de99625dad47cd
Deleted: sha256:29528317da62a27024338f18abf29c992d6cdb4087f7d195cb6725bbe6bd15cc
Deleted: sha256:729a95d3f7234b02c27bdaf4fd81fd3fb9453445a85b713398c6bd05ad290ff5
Deleted: sha256:fcea8c486bda6858dee33a0ce494fba4839e542554b0588f6d00833a4155a537
Deleted: sha256:53cda6d9c060289530670af7ac429015f88d1ac58417f94f22c3dd2f03210436
Deleted: sha256:67903cf26ef4095868687002e3dc6f78ad275677704bf0d11524f16209cec48e
PS C:\WINDOWS\system32> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
407ada6e90de: Pull complete
711a33cda32c: Pull complete
f2954926b3d8: Pull complete
8b6a3aeeca73: Pull complete
Digest: sha256:66ef312bbac49c39a89aa9bcc3cb4f3c9e7de3788c944158df3ee0176d32b751
Status: Downloaded newer image for hello-world:latest
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: container 99a306c2336a7bd503bfe8a744ace77cedc19bbc0d15e52b8d899bcea3db6b96 encountered an error during CreateContainer: failure in a Windows system call: No hypervisor is present on this system. (0xc0351000) extra info: {"SystemType":"Container","Name":"99a306c2336a7bd503bfe8a744ace77cedc19bbc0d15e52b8d899bcea3db6b96","Owner":"docker","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\Docker\\windowsfilter\\99a306c2336a7bd503bfe8a744ace77cedc19bbc0d15e52b8d899bcea3db6b96","Layers":[{"ID":"a5eef81d-74bf-53d1-8517-78b635324fdb","Path":"C:\\ProgramData\\Docker\\windowsfilter\\afb89f854af8452a0a12dfb14dc47995e001057c7af209be45ed5ee4813d2ffd"},{"ID":"744a6817-2b8a-5b6a-a717-8932a5863c9f","Path":"C:\\ProgramData\\Docker\\windowsfilter\\21a39c2b74ff220eac42f6f96d6097a7ef0feb192c1a77c0e88068cd10207d33"},{"ID":"ee281c98-febf-545b-bd51-8aec0a88f617","Path":"C:\\ProgramData\\Docker\\windowsfilter\\62439684561a3d30068cae2c804512984637d4c8b489f6f7cbcb5c8fed588af5"},{"ID":"f023cffb-ac18-57fe-9894-a2f1798fd0b0","Path":"C:\\ProgramData\\Docker\\windowsfilter\\1354f5a762901ec48bcf6a3ca8aab615bc305e91315e6e77fdf2c8fee5d587a2"}],"HostName":"99a306c2336a","HvPartition":true,"EndpointList":["2ce5269d-8776-4e84-8b37-4d99fa0a9f7b"],"HvRuntime":{"ImagePath":"C:\\ProgramData\\Docker\\windowsfilter\\62439684561a3d30068cae2c804512984637d4c8b489f6f7cbcb5c8fed588af5\\UtilityVM"},"AllowUnqualifiedDNSQuery":true}.
PS C:\WINDOWS\system32> systeminfo
Host Name: XXXX
OS Name: Microsoft Windows 10 Pro
OS Version: 10.0.16299 N/A Build 16299
OS Manufacturer: Microsoft Corporation
OS Configuration: Standalone Workstation
OS Build Type: Multiprocessor Free
Registered Owner: '#.lp
Registered Organization:
Product ID: XXXXXXXXXXXXXXXXXXXXXXXXXXX
Original Install Date: 10/12/2017, 23:15:17
System Boot Time: 06/01/2018, 13:53:55
System Manufacturer: System manufacturer
System Model: System Product Name
System Type: x64-based PC
Processor(s): 1 Processor(s) Installed.
[01]: Intel64 Family 6 Model 15 Stepping 11 GenuineIntel ~2401 Mhz
BIOS Version: American Megatrends Inc. 0902 , 27/07/2011
Windows Directory: C:\WINDOWS
System Directory: C:\WINDOWS\system32
Boot Device: \Device\HarddiskVolume1
System Locale: en-gb;English (United Kingdom)
Input Locale: en-gb;English (United Kingdom)
Time Zone: (UTC+00:00) Dublin, Edinburgh, Lisbon, London
Total Physical Memory: 8,191 MB
Available Physical Memory: 2,209 MB
Virtual Memory: Max Size: 16,383 MB
Virtual Memory: Available: 4,745 MB
Virtual Memory: In Use: 11,638 MB
Page File Location(s): C:\pagefile.sys
Domain: WORKGROUP
Logon Server: \\XXXXX
Hotfix(s): 7 Hotfix(s) Installed.
[01]: KB4048951
[02]: KB4053577
[03]: KB4054022
[04]: KB4055237
[05]: KB4056887
[06]: KB4058043
[07]: KB4054517
Network Card(s): 5 NIC(s) Installed.
[01]: TunnelBear Adapter V9
Connection Name: Ethernet
Status: Media disconnected
[02]: Qualcomm Atheros AR8131 PCI-E Gigabit Ethernet Controller (NDIS 6.30)
Connection Name: Local Area Connection
Status: Media disconnected
[03]: Compact Wireless-G USB Network Adapter
Connection Name: Wi-Fi
DHCP Enabled: Yes
DHCP Server: XXXXX
IP address(es)
[01]: XXX
[02]: XXX
[04]: Hyper-V Virtual Ethernet Adapter
Connection Name: vEthernet (Default Switch)
DHCP Enabled: Yes
DHCP Server: 255.255.255.255
IP address(es)
[01]: X
[02]: X
[05]: Hyper-V Virtual Ethernet Adapter
Connection Name: vEthernet (nat)
DHCP Enabled: Yes
DHCP Server: 255.255.255.255
IP address(es)
[01]: X
[02]: X
Hyper-V Requirements: VM Monitor Mode Extensions: Yes
Virtualization Enabled In Firmware: Yes
Second Level Address Translation: No
Data Execution Prevention Available: Yes
Update 2
Still getting the same error, any idea?
PS C:\Users\'#.lp> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
e46172273a4e: Pull complete
61703422ec93: Pull complete
a17b8d9caad6: Pull complete
2dccc7619f71: Pull complete
Digest: sha256:41a65640635299bab090f783209c1e3a3f11934cf7756b09cb2f1e02147c6ed8
Status: Downloaded newer image for hello-world:latest
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: CreateComputeSystem 755110bc7813700701f2325c921fad7a4220c8ff91d620ac51e258cb8b1ab700: No hypervisor is present on this system.
(extra info: {"SystemType":"Container","Name":"755110bc7813700701f2325c921fad7a4220c8ff91d620ac51e258cb8b1ab700","Owner":"docker","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\Docker\\windowsfilter\\755110bc7813700701f2325c921fad7a4220c8ff91d620ac51e258cb8b1ab700","Layers":[{"ID":"535189fb-71a2-598a-bd98-f711c29cf301","Path":"C:\\ProgramData\\Docker\\windowsfilter\\5e4cc131c334b8171b269003b9659ba578f9528372dd28054624d0bbde003b4f"},{"ID":"93d17dd0-2837-5522-a207-2b9e009a9d2b","Path":"C:\\ProgramData\\Docker\\windowsfilter\\87d235bd8d5ca1534f7396bf90d96ee9012875f8ae0e56556af19ebce73cdf80"},{"ID":"6899fe53-2cd7-5ec6-8edc-bf8859eea3e7","Path":"C:\\ProgramData\\Docker\\windowsfilter\\f75a64ae1fe066c392738bc643e1f49f1f0ee0bce4214c8655714b7386cdc3fc"},{"ID":"efbc003d-b691-5d30-ad65-d7dff28ca9e8","Path":"C:\\ProgramData\\Docker\\windowsfilter\\74033dce6b43107101f831d96c6bebe0ceb1df34f8e5c82421ee3f296b20a70c"}],"HostName":"755110bc7813","HvPartition":true,"EndpointList":["93c1c71e-11b5-49d3-82fd-d467d9b625b6"],"HvRuntime":{"ImagePath":"C:\\ProgramData\\Docker\\windowsfilter\\f75a64ae1fe066c392738bc643e1f49f1f0ee0bce4214c8655714b7386cdc3fc\\UtilityVM"},"AllowUnqualifiedDNSQuery":true}).
PS C:\Users\'#.lp> docker --version
Docker version 18.09.2, build 6247962
PS C:\Users\'#.lp>
Here is what worked for me: Open command prompt as admin and run
bcdedit /set hypervisorlaunchtype auto
and then reboot
What had happened:
I had to start an Android emulator, and Android Studio said "Emulator is incompatible with Hyper-V", so it ran this command to disable Hyper-V: bcdedit /set hypervisorlaunchtype off
These steps fixed it:
1. bcdedit /set hypervisorlaunchtype auto
2. reboot computer
3. docker run hello-world
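To confirm the change actually took effect after the reboot, a quick check from an elevated prompt (a sketch; look for the hypervisorlaunchtype line in the output):

bcdedit /enum {current}
REM the entry should read: hypervisorlaunchtype    Auto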
If both Docker and Hyper-V are installed, try to recreate the image in Docker. It worked for me.
You can check the status of Hyper-V on the system by typing the following command in PowerShell:
systeminfo
You should also switch Docker to Windows containers if you haven't already.
Had the same problem.
Had to enable virtualization in the BIOS to solve it.
If you're attempting to run Docker for Windows inside a Parallels virtual machine, you must enable 'Nested Virtualization'.
https://kb.parallels.com/en/116239
This is only available in the Pro and Business Editions. I had to upgrade my version to support this as I was running Desktop.
If you are running Docker in a VM, you may need to look into "Nested Virtualization": virtualization needs to be exposed from the physical server to the VM.
For example, to expose virtualization on the Hyper-V platform through PowerShell:
Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true
see the link:
Run Hyper-V in a Virtual Machine with Nested Virtualization
I faced the same issue and it got resolved after upgrading my Windows to the latest version...
I did everything as suggested on this post and others to no avail. What did work for me was the following:
Turn Windows features OFF: Hyper-V and Containers
Force a Windows update to Windows 10 Pro ver. 1803
The update completed. Then when I started Docker it asked me if I wanted to enable Hyper-V and Containers. I answered yes. The machine rebooted twice.
After this everything worked perfectly. Unfortunately I cannot say for sure whether point 1, point 2, or both together fixed the issue. I would suggest trying point 1 above first, followed by a reboot, then starting Docker. I suspect this, rather than forcing an update to Windows 1803, will fix the problem.
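The same toggling can also be done from an elevated PowerShell prompt; a sketch, assuming the usual feature names (check them with Get-WindowsOptionalFeature -Online), with a reboot still required between the two steps:

# turn the features off, then reboot
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All, Containers
# after the reboot, turn them back on and reboot again
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All, Containers -All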
For VirtualBox users, you need to enable nested virtualization:
VM -> Configuration -> System -> Processor -> Enable nested VT-x/AMD-v
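If the checkbox is greyed out in the GUI, the same setting can be flipped from the command line; a sketch, where "My VM" is a placeholder for your machine name and a recent VirtualBox version is assumed:

VBoxManage modifyvm "My VM" --nested-hw-virt on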
Run the following command in Windows PowerShell:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
If it requires a restart, just follow the steps.
For more info or options, please check the docs from Microsoft.
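To check whether the feature is already enabled before (or after) rebooting, something like this should work from an elevated PowerShell session (a sketch):

Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V | Select-Object FeatureName, State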
Step 1:
Uninstall Docker.
Step 2:
Open "Turn windows features on or off" from Control Panel.
Uncheck both features - "Containers" and "Hyper-V", if they are checked.
Step 3:
Reboot PC
Step 4:
Check both features - "Containers" and "Hyper-V".
Step 5:
Reboot PC.
Step 6:
Install docker and execute docker run hello-world
BIOS-level virtualization is a must.
Hyper-V and Containers Windows features must be enabled.
The following hardware prerequisites are required to successfully run Client Hyper-V on Windows 10:
64-bit processor with Second Level Address Translation (SLAT)
4GB system RAM
BIOS-level hardware virtualization support must be enabled in the BIOS settings.
https://docs.docker.com/docker-for-windows/install/
