Docker Desktop on Windows 10 crashes on switching to Windows Containers - docker

I've installed the latest version, which is 2.1.0.4 (39773) as of today.
When I switch to Windows Containers via the right-click menu of the Docker status icon, I get the following error after a while. I do have Hyper-V enabled on my machine.
I'm wondering if anyone else is experiencing this issue and what the resolution might be.
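Note: switching to Windows containers needs the Containers Windows feature in addition to Hyper-V; a quick way to double-check both from an elevated PowerShell prompt:
Get-WindowsOptionalFeature -Online |
    Where-Object { $_.FeatureName -in 'Microsoft-Hyper-V', 'Containers' } |
    Select-Object FeatureName, State   # both should show State : Enabled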
Attaching the Docker log for this:
[14:45:57.808][VpnKit ][Info ] vpnkit.exe: 2 upstream DNS servers are configured
[14:45:57.808][VpnKit ][Info ] vpnkit.exe: New Gateway forward configuration: [{"protocol":"udp","external_port":53,"internal_ip":"127.0.0.1","internal_port":51700},{"protocol":"tcp","external_port":53,"internal_ip":"127.0.0.1","internal_port":58896}]
[14:45:57.808][VpnKit ][Info ] vpnkit.exe: Updating transparent HTTP redirection: {
[14:45:57.808][VpnKit ][Info ] "exclude": "",
[14:45:57.808][VpnKit ][Info ] "transparent_http_ports": [
[14:45:57.808][VpnKit ][Info ] 80
[14:45:57.808][VpnKit ][Info ] ],
[14:45:57.808][VpnKit ][Info ] "transparent_https_ports": [
[14:45:57.919][APIRequestLogger ][Info ] [467d6772-0076-4586-88cf-8fba2bdeb19b] GET http://unix/versions
[14:45:57.920][APIRequestLogger ][Info ] [467d6772-0076-4586-88cf-8fba2bdeb19b] GET http://unix/versions -> 200 OK took 0ms
[14:45:57.808][VpnKit ][Info ] 443
[14:45:57.808][VpnKit ][Info ] ]
[14:45:57.808][VpnKit ][Info ] }
[14:45:57.808][VpnKit ][Info ] vpnkit.exe: C:\Windows\System32\drivers\etc\hosts file has bindings for host.docker.internal gateway.docker.internal host.docker.internal gateway.docker.internal host.docker.internal gateway.docker.internal host.docker.internal gateway.docker.internal kubernetes.docker.internal
[14:45:57.909][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="socket server listening : \\\\.\\pipe\\dockerGuiToDriver"
[14:45:57.924][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="docker proxy: ready"
[14:45:57.928][ApiProxy ][Error ] time="2019-11-15T14:45:57+11:00" msg="Unable to forward a named-pipe to the lifecycle-server: /forwards/expose/pipe returned unexpected status: 500"
[14:45:57.928][ApiProxy ][Error ] time="2019-11-15T14:45:57+11:00" msg="Unable to forward a named-pipe to the Linux docker engine: /forwards/expose/pipe returned unexpected status: 500"
[14:45:57.928][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" level=info msg=waitForDockerUp
[14:45:57.928][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="socket server starting : \\\\.\\pipe\\dockerGuiToDriver"
[14:45:57.928][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="Static DNS lookup table: map[docker-desktop.:19.16.65.3 docker-for-desktop.:19.16.65.3 docker.for.win.gateway.internal.:19.16.65.1 docker.for.win.host.internal.:19.16.65.2 docker.for.win.http.internal.:19.16.65.1 docker.for.win.localhost.:19.16.65.2 gateway.docker.internal.:19.16.65.1 host.docker.internal.:19.16.65.2 kubernetes.docker.internal.:19.16.65.3 vm.docker.internal.:19.16.65.3]"
[14:45:57.929][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="proxy >> HEAD /_ping\n"
[14:45:57.929][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="Writing C:\\Users\\AppData\\Roaming\\Docker\\gateway_forwards.json"
[14:45:57.931][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="proxy << HEAD /_ping (2.0048ms)\n"
[14:45:57.949][NamedPipeClient ][Info ] Received response for engine/start
[14:45:57.932][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="proxy >> GET /v1.40/info\n"
[14:45:57.932][GoBackendProcess ][Info ] error CloseWrite to: The pipe is being closed.
[14:45:57.939][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="proxy >> GET /v1.40/containers/json\n"
[14:45:57.941][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="proxy << GET /v1.40/containers/json (2.0037ms)\n"
[14:45:57.942][GoBackendProcess ][Info ] error CloseWrite to: The pipe is being closed.
[14:45:57.945][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="proxy << GET /v1.40/info (13.0036ms)\n"
[14:45:57.946][GoBackendProcess ][Info ] error CloseWrite to: The pipe is being closed.
[14:45:57.946][ApiProxy ][Info ] time="2019-11-15T14:45:57+11:00" msg="Docker is responding"
[14:45:57.949][DockerDaemonChecker][Info ] Docker daemon is running
[14:45:57.949][NamedPipeServer ][Info ] engine/start done in 00:00:00.3189743.
[14:45:58.141][ApiProxy ][Info ] time="2019-11-15T14:45:58+11:00" msg="proxy >> HEAD /_ping\n"
[14:45:58.143][ApiProxy ][Info ] time="2019-11-15T14:45:58+11:00" msg="proxy << HEAD /_ping (2.0074ms)\n"
[14:45:58.144][GoBackendProcess ][Info ] error CloseWrite to: The pipe is being closed.
[14:45:58.177][ApiProxy ][Info ] time="2019-11-15T14:45:58+11:00" msg="proxy >> GET /v1.40/info\n"
[14:45:58.189][ApiProxy ][Info ] time="2019-11-15T14:45:58+11:00" msg="proxy << GET /v1.40/info (11.9924ms)\n"
[14:45:58.190][GoBackendProcess ][Info ] error CloseWrite to: The pipe is being closed.
[14:45:58.755][ApiProxy ][Info ] time="2019-11-15T14:45:58+11:00" msg="proxy >> HEAD /_ping\n"
[14:45:58.758][ApiProxy ][Info ] time="2019-11-15T14:45:58+11:00" msg="proxy << HEAD /_ping (3.0077ms)\n"
[14:45:58.759][GoBackendProcess ][Info ] error CloseWrite to: The pipe is being closed.
[14:45:58.783][ApiProxy ][Info ] time="2019-11-15T14:45:58+11:00" msg="proxy >> GET /v1.40/version\n"
[14:45:58.797][ApiProxy ][Info ] time="2019-11-15T14:45:58+11:00" msg="proxy << GET /v1.40/version (14.0121ms)\n"
[14:45:58.798][GoBackendProcess ][Info ] error CloseWrite to: The pipe is being closed.
[14:45:58.808][Notifications ][Info ] Docker Desktop is running
[14:45:58.813][Notifications ][Error ] Time out has expired and the operation has not been completed.
[14:45:58.828][NamedPipeClient ][Info ] Sending app/version()...
[14:45:58.829][NamedPipeClient ][Info ] Received response for app/version
[14:45:58.829][NamedPipeClient ][Info ] Sending diagnostics/gather()...
[14:45:58.829][NamedPipeServer ][Info ] app/version()
[14:45:58.829][NamedPipeServer ][Info ] app/version done in 00:00:00.
[14:45:58.830][NamedPipeServer ][Info ] diagnostics/gather()
[14:46:02.797][VpnKit ][Info ] vpnkit.exe: Gateway forwards file C:\Users\AppData\Roaming\Docker\gateway_forwards.json has changed
[14:46:02.797][VpnKit ][Info ] vpnkit.exe: Reading gateway forwards file from C:\Users\AppData\Roaming\Docker\gateway_forwards.json
[14:46:02.799][VpnKit ][Info ] vpnkit.exe: New Gateway forward configuration: [{"protocol":"udp","external_port":53,"internal_ip":"127.0.0.1","internal_port":54694},{"protocol":"tcp","external_port":53,"internal_ip":"127.0.0.1","internal_port":59304}]
[14:46:13.196][CrashReport ][Info ] Diagnostics cancelled by user
[14:46:13.197][NamedPipeClient ][Info ] Sending app/version()...
[14:46:13.198][NamedPipeClient ][Info ] Received response for app/version
[14:46:13.198][NamedPipeClient ][Info ] Sending diagnostics/stop-gather()...
[14:46:13.199][NamedPipeClient ][Info ] Received response for diagnostics/stop-gather
[14:46:13.197][NamedPipeServer ][Info ] app/version()
[14:46:13.201][NamedPipeClient ][Error ] Unable to send diagnostics/gather: Object reference not set to an instance of an object.
[14:46:13.202][CrashReport ][Warning] Unable to gather diagnostics in Windows Service : (Object reference not set to an instance of an object.)
[14:46:13.198][NamedPipeServer ][Info ] app/version done in 00:00:00.0009991.
[14:46:13.204][CrashReport ][Warning] Starting to gather diagnostics as User : 'C:\Program Files\Docker\Docker\Resources\com.docker.diagnose.exe' gather.
[14:46:13.198][NamedPipeServer ][Info ] diagnostics/stop-gather()
[14:46:13.199][NamedPipeServer ][Info ] diagnostics/stop-gather done in 00:00:00.0010009.
[14:46:13.200][NamedPipeServer ][Error ] Unable to execute diagnostics/gather: Object reference not set to an instance of an object. at Docker.Backend.BackendService.GatherDiagnostics()
at Docker.Core.Pipe.NamedPipeServer.RunAction(String action, Object[] parameters)

In Docker Desktop's Settings, make sure the checkbox "Expose daemon on ...." is ticked. I was facing the exact same error and tried multiple other solutions, but this was the main trick.
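If the tray menu keeps failing mid-switch, the engine switch can also be triggered from an elevated PowerShell prompt via DockerCli.exe (path assumes a default install, matching the com.docker.diagnose.exe path in the log above):
& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon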

Related

How to enable rabbitmq plugin "rabbitmq_delayed_message_exchange" if rabbitmq was deployed using the rabbitmq operator in kubernetes

I deployed a RabbitMQ instance using the RabbitMQ operator in Kubernetes, and I'm trying to enable the rabbitmq_delayed_message_exchange plugin.
I tried defining my RabbitmqCluster as:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: definition
spec:
  replicas: 1
  rabbitmq:
    additionalPlugins:
      - rabbitmq_management
      - rabbitmq_delayed_message_exchange
  service:
    type: LoadBalancer
And then I ran kubectl apply -f definition.yaml
But my pod logs showed this:
...
2020-10-05T15:42:15.081783023Z 2020-10-05 15:42:15.081 [info] <0.535.0> Server startup complete; 6 plugins started.
2020-10-05T15:42:15.081802701Z * rabbitmq_prometheus
2020-10-05T15:42:15.08180602Z * rabbitmq_peer_discovery_k8s
2020-10-05T15:42:15.081808816Z * rabbitmq_peer_discovery_common
2020-10-05T15:42:15.081811359Z * rabbitmq_management
2020-10-05T15:42:15.08181387Z * rabbitmq_web_dispatch
2020-10-05T15:42:15.081825082Z * rabbitmq_management_agent
2020-10-05T15:42:15.081951576Z completed with 6 plugins.
...
There wasn't any reference to this plugin in the logs.
I went inside my rabbitmq pod and ran: rabbitmq-plugins list
Listing plugins with pattern ".*" ...
Configured: E = explicitly enabled; e = implicitly enabled
| Status: * = running on rabbit@definition-rabbitmq-server-0.definition-rabbitmq-headless.default
|/
[ ] rabbitmq_amqp1_0 3.8.8
[ ] rabbitmq_auth_backend_cache 3.8.8
[ ] rabbitmq_auth_backend_http 3.8.8
[ ] rabbitmq_auth_backend_ldap 3.8.8
[ ] rabbitmq_auth_backend_oauth2 3.8.8
[ ] rabbitmq_auth_mechanism_ssl 3.8.8
[ ] rabbitmq_consistent_hash_exchange 3.8.8
[ ] rabbitmq_event_exchange 3.8.8
[ ] rabbitmq_federation 3.8.8
[ ] rabbitmq_federation_management 3.8.8
[ ] rabbitmq_jms_topic_exchange 3.8.8
[E*] rabbitmq_management 3.8.8
[e*] rabbitmq_management_agent 3.8.8
[ ] rabbitmq_mqtt 3.8.8
[ ] rabbitmq_peer_discovery_aws 3.8.8
[e*] rabbitmq_peer_discovery_common 3.8.8
[ ] rabbitmq_peer_discovery_consul 3.8.8
[ ] rabbitmq_peer_discovery_etcd 3.8.8
[E*] rabbitmq_peer_discovery_k8s 3.8.8
[E*] rabbitmq_prometheus 3.8.8
[ ] rabbitmq_random_exchange 3.8.8
[ ] rabbitmq_recent_history_exchange 3.8.8
[ ] rabbitmq_sharding 3.8.8
[ ] rabbitmq_shovel 3.8.8
[ ] rabbitmq_shovel_management 3.8.8
[ ] rabbitmq_stomp 3.8.8
[ ] rabbitmq_top 3.8.8
[ ] rabbitmq_tracing 3.8.8
[ ] rabbitmq_trust_store 3.8.8
[e*] rabbitmq_web_dispatch 3.8.8
[ ] rabbitmq_web_mqtt 3.8.8
[ ] rabbitmq_web_mqtt_examples 3.8.8
[ ] rabbitmq_web_stomp 3.8.8
[ ] rabbitmq_web_stomp_examples 3.8.8
and checked the pod plugins/ directory:
README
accept-0.3.5.ez
amqp10_client-3.8.8.ez
amqp10_common-3.8.8.ez
amqp_client-3.8.8.ez
aten-0.5.5.ez
base64url-0.0.1.ez
cowboy-2.6.1.ez
cowlib-2.7.0.ez
credentials_obfuscation-2.2.0.ez
cuttlefish-2.4.1.ez
eetcd-0.3.3.ez
gen_batch_server-0.8.4.ez
getopt-1.0.1.ez
goldrush-0.1.9.ez
gun-1.3.3.ez
jose-1.10.1.ez
jsx-2.11.0.ez
lager-3.8.0.ez
observer_cli-1.5.4.ez
prometheus-4.6.0.ez
ra-1.1.6.ez
rabbit-3.8.8.ez
rabbit_common-3.8.8.ez
rabbitmq_amqp1_0-3.8.8.ez
rabbitmq_auth_backend_cache-3.8.8.ez
rabbitmq_auth_backend_http-3.8.8.ez
rabbitmq_auth_backend_ldap-3.8.8.ez
rabbitmq_auth_backend_oauth2-3.8.8.ez
rabbitmq_auth_mechanism_ssl-3.8.8.ez
rabbitmq_aws-3.8.8.ez
rabbitmq_consistent_hash_exchange-3.8.8.ez
rabbitmq_event_exchange-3.8.8.ez
rabbitmq_federation-3.8.8.ez
rabbitmq_federation_management-3.8.8.ez
rabbitmq_jms_topic_exchange-3.8.8.ez
rabbitmq_management-3.8.8.ez
rabbitmq_management_agent-3.8.8.ez
rabbitmq_mqtt-3.8.8.ez
rabbitmq_peer_discovery_aws-3.8.8.ez
rabbitmq_peer_discovery_common-3.8.8.ez
rabbitmq_peer_discovery_consul-3.8.8.ez
rabbitmq_peer_discovery_etcd-3.8.8.ez
rabbitmq_peer_discovery_k8s-3.8.8.ez
rabbitmq_prelaunch-3.8.8.ez
rabbitmq_prometheus-3.8.8.ez
rabbitmq_random_exchange-3.8.8.ez
rabbitmq_recent_history_exchange-3.8.8.ez
rabbitmq_sharding-3.8.8.ez
rabbitmq_shovel-3.8.8.ez
rabbitmq_shovel_management-3.8.8.ez
rabbitmq_stomp-3.8.8.ez
rabbitmq_top-3.8.8.ez
rabbitmq_tracing-3.8.8.ez
rabbitmq_trust_store-3.8.8.ez
rabbitmq_web_dispatch-3.8.8.ez
rabbitmq_web_mqtt-3.8.8.ez
rabbitmq_web_mqtt_examples-3.8.8.ez
rabbitmq_web_stomp-3.8.8.ez
rabbitmq_web_stomp_examples-3.8.8.ez
ranch-1.7.1.ez
recon-2.5.1.ez
stdout_formatter-0.2.4.ez
syslog-3.4.5.ez
sysmon_handler-1.3.0.ez
So the plugin does not ship with the image.
I also found this:
How to install rabbitmq plugin on kubernetes?
But it makes no reference to the RabbitMQ operator, and it was asked in June 2018. The RabbitMQ operator configuration also has no reference to using lifecycle hooks to mount the .ez file.
One idea that comes to mind is building my own RabbitMQ image based on the official one and adding the plugin:
FROM rabbitmq:3.8.8-management
RUN apt-get update && apt-get install -y curl
RUN curl -L https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases/download/v3.8.0/rabbitmq_delayed_message_exchange-3.8.0.ez > $RABBITMQ_HOME/plugins/rabbitmq_delayed_message_exchange-3.8.0.ez
RUN chown rabbitmq:rabbitmq $RABBITMQ_HOME/plugins/rabbitmq_delayed_message_exchange-3.8.0.ez
RUN rabbitmq-plugins enable --offline rabbitmq_delayed_message_exchange
RUN rabbitmq-plugins enable --offline rabbitmq_consistent_hash_exchange
A second idea is to mount the file into the pod's plugins directory by defining a ConfigMap with the file and using volumeMounts, but I couldn't find any reference to using volumeMounts with the RabbitMQ operator.
Is there a preferred way or any other way to enable it?
This plugin can be enabled by using a custom RabbitMQ Docker image with the plugin installed:
FROM rabbitmq:3.8.8-management
# curl is needed to fetch the plugin release
RUN apt-get update && apt-get install -y curl
# Download the delayed-message plugin into the image's plugins directory
RUN curl -L https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases/download/v3.8.0/rabbitmq_delayed_message_exchange-3.8.0.ez > $RABBITMQ_HOME/plugins/rabbitmq_delayed_message_exchange-3.8.0.ez
RUN chown rabbitmq:rabbitmq $RABBITMQ_HOME/plugins/rabbitmq_delayed_message_exchange-3.8.0.ez
# Enable the plugins at build time so they are active when the node boots
RUN rabbitmq-plugins enable --offline rabbitmq_delayed_message_exchange
RUN rabbitmq-plugins enable --offline rabbitmq_consistent_hash_exchange
Notice: for immutable results, or if you can't depend on the external download staying available, download the plugin to your machine and use COPY instead of RUN curl .... Make sure the rabbitmq_delayed_message_exchange .ez file is on your machine.
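A minimal sketch of that COPY variant, assuming the .ez file has already been downloaded next to the Dockerfile:
FROM rabbitmq:3.8.8-management
# Copy the pre-downloaded plugin instead of fetching it at build time
COPY --chown=rabbitmq:rabbitmq rabbitmq_delayed_message_exchange-3.8.0.ez $RABBITMQ_HOME/plugins/
RUN rabbitmq-plugins enable --offline rabbitmq_delayed_message_exchange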
Push your image to a container registry.
And then define your RabbitmqCluster as:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: definition
spec:
  image: <container-registry>/<username>/rabbitmq-delayed-message-exchange:3.8.8-management
  replicas: 1
  rabbitmq:
    additionalPlugins:
      - rabbitmq_management
      - rabbitmq_delayed_message_exchange
  service:
    type: LoadBalancer
Notice: Change the image to the one you pushed.
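To verify once the cluster is up, you can list the plugin from inside the pod (the pod name below follows the <name>-rabbitmq-server-0 pattern from the question's listing; adjust it for your cluster):
kubectl exec definition-rabbitmq-server-0 -- rabbitmq-plugins list rabbitmq_delayed_message_exchange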

Artifactory oss install with Docker

I'm trying to install Artifactory OSS using Docker.
I'm running Ubuntu 18.04 and Docker 19.03.8.
I followed the JFrog installation guide https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory#InstallingArtifactory-DockerInstallation
I did all the steps, except that chown -R 1030:1030 $JFROG_HOME/artifactory/var must be run with sudo.
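That is, on the host:
sudo chown -R 1030:1030 $JFROG_HOME/artifactory/var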
The container starts. But when I go to http://myhost:8082/ui/ I only see a page with the JFrog logo pulsing with a zoom-in/zoom-out effect.
In the logs I see:
################################################################
### All services started successfully in 116.053 seconds ###
################################################################
2020-03-26T07:27:05.070Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main] - Server configuration reloaded on localhost:8046
2020-03-26T07:27:05.070Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8049
2020-03-26T07:27:05.071Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on :8082
2020-03-26T07:27:05.109Z [jfac ] [INFO ] [ ] [alConfigurationServiceBase:182] [c-default-executor-1] - Loading configuration from db finished successfully
2020-03-26T07:27:07.104Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on :8082
2020-03-26T07:27:07.105Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8046
2020-03-26T07:27:07.105Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8049
2020-03-26T07:27:10.084Z [jfrou] [WARN ] [6ec6165e7fec2711] [ternal_topology_verifier.go:92] [main ] - failed pinging external node 'f461d2eebfe3' at address 'http://172.17.0.2:8082': Get http://172.17.0.2:8082/router/api/v1/system/ping: context deadline exceeded
The last line appears when I request the URL in the browser.
What can I do?
Edit:
I also tried with docker-compose, again following the JFrog guide.
On the first run, Artifactory did not start.
After editing .jfrog/artifactory/var/etc/system.yaml, changing 127.0.0.1 to my host name, and running config.sh again, Artifactory starts.
But I get the same problem when accessing http://myhost:8082/ui/.
I don't understand what's happening and why it is not working when I follow the JFrog guides...
In my case, it turned out that my proxy settings were blocking the HTTP client from contacting the local endpoint.
I updated docker-compose.yml to include the no_proxy and noproxy environment variables, and Artifactory now runs without any complaint:
services:
  artifactory:
    environment:
      - http_proxy=*********
      - https_proxy=*********
      - no_proxy=*********
      - noproxy=172.16.0.0/12
    image: docker.bintray.io/jfrog/artifactory-oss:latest
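After adding the variables, recreating the container and hitting the router ping endpoint (the same one failing in the log above) is a quick sanity check; hostname as in the question:
docker-compose up -d
curl http://myhost:8082/router/api/v1/system/ping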
The solution was quite simple: try another browser!
With Edge it's not working.
With Firefox it's working...

Docker for Windows - Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden

I'm trying to run Kubernetes from Docker for Windows. After I click Enable Kubernetes in the Kubernetes tab, the "Kubernetes is starting..." process runs into an endless state.
Taking a look at the service.txt log in C:\ProgramData\DockerDesktop\pki, Docker repeats the following log block the whole time.
[10:23:26.068][ApiProxy ][Error ] time="2020-01-14T10:23:26+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:27.080][ApiProxy ][Error ] time="2020-01-14T10:23:27+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:28.071][ApiProxy ][Error ] time="2020-01-14T10:23:28+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:28.624][ApiProxy ][Info ] time="2020-01-14T10:23:28+01:00" msg="DNS failure: www-cache.\tIN\t A: errno 9002: DnsQuery: DNS-Serverfehler."
[10:23:28.626][ApiProxy ][Info ] time="2020-01-14T10:23:28+01:00" msg="DNS failure: www-cache.\tIN\t AAAA: errno 9002: DnsQuery: DNS-Serverfehler."
[10:23:29.068][ApiProxy ][Error ] time="2020-01-14T10:23:29+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:30.083][ApiProxy ][Error ] time="2020-01-14T10:23:30+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:31.088][ApiProxy ][Error ] time="2020-01-14T10:23:31+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:32.068][ApiProxy ][Error ] time="2020-01-14T10:23:32+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:32.715][ApiProxy ][Info ] time="2020-01-14T10:23:32+01:00" msg="DNS failure: www-cache.\tIN\t AAAA: errno 9002: DnsQuery: DNS-Serverfehler."
[10:23:32.717][ApiProxy ][Info ] time="2020-01-14T10:23:32+01:00" msg="DNS failure: www-cache.\tIN\t A: errno 9002: DnsQuery: DNS-Serverfehler."
[10:23:33.068][ApiProxy ][Error ] time="2020-01-14T10:23:33+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:34.074][ApiProxy ][Error ] time="2020-01-14T10:23:34+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:34.658][ApiProxy ][Info ] time="2020-01-14T10:23:34+01:00" msg="DNS failure: www-cache.\tIN\t A: errno 9002: DnsQuery: DNS-Serverfehler."
[10:23:34.661][ApiProxy ][Info ] time="2020-01-14T10:23:34+01:00" msg="DNS failure: www-cache.\tIN\t AAAA: errno 9002: DnsQuery: DNS-Serverfehler."
[10:23:35.069][ApiProxy ][Error ] time="2020-01-14T10:23:35+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:36.074][ApiProxy ][Error ] time="2020-01-14T10:23:36+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:37.070][ApiProxy ][Error ] time="2020-01-14T10:23:37+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:38.072][ApiProxy ][Error ] time="2020-01-14T10:23:38+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:39.072][ApiProxy ][Error ] time="2020-01-14T10:23:39+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:39.681][ApiProxy ][Info ] time="2020-01-14T10:23:39+01:00" msg="DNS failure: www-cache.\tIN\t AAAA: errno 9002: DnsQuery: DNS-Serverfehler."
[10:23:39.684][ApiProxy ][Info ] time="2020-01-14T10:23:39+01:00" msg="DNS failure: www-cache.\tIN\t A: errno 9002: DnsQuery: DNS-Serverfehler."
[10:23:40.069][ApiProxy ][Error ] time="2020-01-14T10:23:40+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:41.076][ApiProxy ][Error ] time="2020-01-14T10:23:41+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:42.089][ApiProxy ][Error ] time="2020-01-14T10:23:42+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:42.745][ApiProxy ][Info ] time="2020-01-14T10:23:42+01:00" msg="DNS failure: www-cache.\tIN\t A: errno 9002: DnsQuery: DNS-Serverfehler."
[10:23:42.748][ApiProxy ][Info ] time="2020-01-14T10:23:42+01:00" msg="DNS failure: www-cache.\tIN\t AAAA: errno 9002: DnsQuery: DNS-Serverfehler."
[10:23:43.071][ApiProxy ][Error ] time="2020-01-14T10:23:43+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:44.088][ApiProxy ][Error ] time="2020-01-14T10:23:44+01:00" msg="Cannot list nodes: Get https://kubernetes.docker.internal:6443/api/v1/nodes: Forbidden"
[10:23:44.758][VpnKit ][Info ] vpnkit.exe: Expired 256 UDP NAT rules
Troubleshooting:
Proxy settings
My machine is behind a proxy, so I added the corresponding information in the Proxy tab.
No changes
Ping kubernetes.docker.internal
Pinging kubernetes.docker.internal [127.0.0.1] with 32 bytes of data:
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Ping statistics for 127.0.0.1:
    Packets: Sent = 4, Received = 4, Lost = 0
Proxy: Ignore Local Address
Since kubernetes.docker.internal is a local address, I added it to the proxy ignore list in Docker and in my machine's Internet options.
No changes
Install ca.crt from C:\ProgramData\DockerDesktop\pki
I also tried adding the Docker .crt to my machine's trusted certificates.
No changes
Remove PKI and Reset Kubernetes Cluster
The endless "Kubernetes is starting" state is not rare, so I found a lot of suggestions on GitHub about handling it. Most of the workable suggestions involve removing stuff and resetting Docker. I tried all of them multiple times.
No changes
Call https://kubernetes.docker.internal:6443/api/v1/nodes in the browser
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:22:37 2019
OS/Arch: windows/amd64
Experimental: false
Testing on a Windows 10 machine.
I've fired all my shots and have no clue what to do next.
I'm having the same problem, and it seems that the k8s API doesn't want to answer the TLS Client Hello message. I checked the traffic with Wireshark on the local interface (the one used for kubernetes.docker.internal). The TCP session setup works properly.
I also checked the "Show system containers (advanced)" option in the Docker for Windows settings under the Kubernetes tab, but docker ps -a does not show any containers (I'm not sure it should, but the option's name suggests that to me).
I would gladly continue debugging and see whether the API service is actually running in the Hyper-V virtual machine that backs Docker on Windows, but I'm not able to connect to it through Hyper-V Manager. Any idea how to check that and get the logs for the service?
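Regarding a shell in the VM: Hyper-V Manager won't give you a console for the Docker Desktop VM, but a commonly shared workaround (assuming it still works with your Docker version) is a privileged container that enters the host VM's namespaces, e.g. the third-party justincormack/nsenter1 image:
docker run --rm -it --privileged --pid=host justincormack/nsenter1
# from the resulting shell you can poke at the kube components and their logs inside the VM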
I highly recommend bringing K8s up with the Windows Firewall fully OFF and while connected to a home network. Booting Docker & K8s while connected to the corporate network causes it to hang again at "Kubernetes is starting...".
Another solution:
1. Change DNS to fixed and use 8.8.8.8 (this is within Docker for Windows' settings).
2. Remove the .kube folder.
3. Add the KUBECONFIG environment variable to System Variables with the path C:\Users\[MYUSER]\.kube\config. Note that before, I had it set as a User Variable. (Steps 2-3 are sketched in PowerShell below.)
4. Restart Docker from the reset tab in Docker for Windows' settings.
5. Restart the Kubernetes cluster from the same reset tab (you can do this a number of times).
Afterwards just wait for some time, and "Kubernetes is running" should display.
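A rough PowerShell version of steps 2-3 (run as Administrator; paths assume the default profile location):
# Step 2: remove the existing kube config folder
Remove-Item -Recurse -Force "$env:USERPROFILE\.kube" -ErrorAction SilentlyContinue
# Step 3: set KUBECONFIG as a system-wide (Machine) environment variable
[Environment]::SetEnvironmentVariable('KUBECONFIG', "$env:USERPROFILE\.kube\config", 'Machine')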
Take a look here: kubernetes-fails-to-start.
I hope it helps.

docker mount on windows, directory is empty

Running a simple Docker test:
docker run --rm -v c:/Users:/data alpine ls -al /data
results in the following output:
total 4
drwxr-xr-x 2 root root 40 Oct 19 09:02 .
drwxr-xr-x 1 root root 4096 Oct 19 09:05 ..
The directory is empty; in Windows it contains the user folders.
Edit:
Further to this, looking at the docker log file, I see this:
[11:11:02.873][SambaShare ][Info ] Creating share "C:\" as "C" with Full Control to "QXV0615"
[11:11:02.957][Cmd ][Info ] C was shared successfully.
[11:11:03.005][Cmd ][Info ] Share name C
[11:11:03.005][Cmd ][Info ] Path C:\
[11:11:03.005][Cmd ][Info ] Remark
[11:11:03.005][Cmd ][Info ] Maximum users No limit
[11:11:03.005][Cmd ][Info ] Users
[11:11:03.005][Cmd ][Info ] Caching Caching disabled
[11:11:03.006][Cmd ][Info ] Permission W9\QXV0615, FULL
[11:11:03.006][Cmd ][Info ] The command completed successfully.
[11:11:03.009][SambaShare ][Info ] "C" is shared
[11:11:03.011][SambaShare ][Error ] Unable to validate cred: Invalid username or password
[11:11:03.011][SambaShare ][Info ] Removing share C
[11:11:03.053][NamedPipeClient][Info ] Received response for Mount
It seems there is a Samba credentials issue. How do I fix the credentials?
VPN issue
The problem occurs when my Cisco VPN is connected.
Volumes will not work while the VPN is connected. I have tried the suggestions here, but no dice!
https://github.com/boot2docker/boot2docker/issues/628
https://github.com/docker/for-win/issues/360
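If it is the credential side (and with the VPN disconnected), one low-risk check is to reset the shared-drive credentials under Settings > Shared Drives and re-run the test from the top of the question:
docker run --rm -v c:/Users:/data alpine ls -al /data
# should now list the user folders instead of an empty directory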

How to stop/start logstash service running in docker

I'm trying to figure out how logstash works/runs inside Docker, and I'm stuck on something simple: starting and stopping logstash.
I started a logstash Docker container with a simple run:
docker run -it --name l2 logstash
with the result:
[Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
The next thing is running /bin/bash with the exec command to get inside the running container:
docker exec -it l2 /bin/bash
root@1b55d3a40d3f:/#
Listing the service status shows that there is no logstash service running.
Where can I find the logstash service to stop/start it?
root@1b55d3a40d3f:/# service --status-all
[ - ] bootlogs
[ - ] bootmisc.sh
[ - ] checkfs.sh
[ - ] checkroot-bootclean.sh
[ - ] checkroot.sh
[ - ] dbus
[ - ] hostname.sh
[ ? ] hwclock.sh
[ - ] killprocs
[ - ] motd
[ - ] mountall-bootclean.sh
[ - ] mountall.sh
[ - ] mountdevsubfs.sh
[ - ] mountkernfs.sh
[ - ] mountnfs-bootclean.sh
[ - ] mountnfs.sh
[ - ] procps
[ - ] rc.local
[ - ] rmnologin
[ - ] sendsigs
[ + ] udev
[ ? ] udev-finish
[ - ] umountfs
[ - ] umountnfs.sh
[ - ] umountroot
[ - ] urandom
[ - ] x11-common
Logstash in the container is not run as a system service; the entrypoint in the image starts a process and keeps the container up until that process ends or fails.
If you do docker top l2, it will show the logstash process running (probably alone) in the container.
To stop logstash, stop the container with docker stop l2; later, when you need to start it again, run docker start l2. This works as long as you set the container's name to l2 when you created or first ran it.
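Putting that together (container name l2 as in the question):
docker top l2      # the logstash process should be listed here
docker stop l2     # stops logstash by stopping the container
docker start l2    # brings it back up later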
Docker Start help: https://docs.docker.com/engine/reference/commandline/start/
Docker stop help: https://docs.docker.com/engine/reference/commandline/stop/
Docker create: https://docs.docker.com/engine/reference/commandline/create/
