I am trying to set up a 3-node Kubernetes cluster on bare metal (1 master and 2 worker nodes). I am following these links for the setup: https://www.linuxtechi.com/install-kubernetes-1-7-centos7-rhel7/
and https://phoenixnap.com/kb/how-to-install-kubernetes-on-centos
Besides the prerequisites mentioned in the above links, I have also stopped the firewall (systemctl stop firewalld), disabled swap, set SELinux to permissive (sudo setenforce 0), and updated the iptables settings:
cat <<EOF > /etc/sysctl.d/master_node_name
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
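For completeness, the other prerequisite commands I ran on each node were roughly the following (these are the standard kubeadm prerequisites, so the exact commands may differ slightly from the guides):
sudo systemctl stop firewalld && sudo systemctl disable firewalld
sudo swapoff -a          # and comment out any swap entry in /etc/fstab so it stays off after reboot
sudo setenforce 0        # and set SELINUX=permissive in /etc/selinux/config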
But as soon as I run kubeadm init --apiserver-advertise-address 192.168.140.48 (on the master node),
I get the following errors in Docker for the k8s_kube-controller-manager container:
E0204 1 leaderelection.go:330 error retrieving resource lock kube-system/kube-controller-manager: Get https://192.168.140.48:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: Forbidden
and for the k8s_kube-scheduler container:
E0204 1 reflector.go:123 k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: Get https://192.168.140.48:6443/api/v1/nodes?limit=500&resourceVersion=0: Forbidden
E0204 10:45:44.629865 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250: Failed to list *v1.Pod: Get https://192.168.140.48:6443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: Forbidden
Any help would be appreciated. Thanks in advance.
The docs say I can set kernel parameters for a Docker task using sysctl, like so:
config {
  sysctl = {
    "net.core.somaxconn" = "16384"
  }
}
This indeed works. But when I tried,
sysctl = {
  "net.core.somaxconn"    = "16384"
  "net.core.rmem_default" = 134217728
  "net.core.rmem_max"     = 134217728
  "net.core.wmem_default" = 134217728
  "net.core.wmem_max"     = 134217728
  "vm.max_map_count"      = 1000000
}
I'm getting the following error.
Sep 28, '22 19:30:22 +0530
Driver Failure
Failed to start container fa2179c3fbfe0a216e457449cfb72a78e08c0be45f10ba9596004fbfc51e5cac: API error (400):
failed to create shim task: OCI runtime create failed:
runc create failed:
unable to start container process:
error during container init:
open /proc/sys/net/core/rmem_default:
no such file or directory: unknown
I couldn't find anywhere in the docs which parameters are allowed to be set using this config.
I spent the whole day banging my head on this issue.
Please let me know if any more info is needed.
In case you are curious I'm trying to run Solana devnet validator as a container in Nomad.
open /proc/sys/net/core/rmem_default: no such file or directory: unknown
There is just no such sysctl parameter inside a Docker container when it is running inside its own network namespace; only namespaced sysctls are visible there. That's unrelated to Nomad. See https://github.com/moby/moby/issues/42282 and follow https://github.com/moby/moby/issues/30778, etc.
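A quick way to see this for yourself (a sketch; the exact behaviour depends on the kernel version, since these sysctls have only gradually been namespaced):
cat /proc/sys/net/core/rmem_default
# on the host this prints a value, e.g. 212992
docker run --rm alpine cat /proc/sys/net/core/rmem_default
# inside the container's network namespace the file is missing:
# cat: can't open '/proc/sys/net/core/rmem_default': No such file or directory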
Docker Desktop was working fine, but after a reboot it doesn't start at all.
I've tried SwitchDaemon, switching to Windows containers, etc., but none of them gets it to start.
I'm using WSL 2 and all my containers are Linux-based. If I reinstall, I'll lose all my data and images.
wsl --list
Windows Subsystem for Linux Distributions:
Ubuntu-20.04 (Default)
docker-desktop
docker-desktop-data
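For reference, a WSL distro can be exported to a tar archive as a backup before any reinstall (the file names here are just examples):
wsl --export docker-desktop-data docker-desktop-data.tar    # holds the images and volumes
wsl --export Ubuntu-20.04 ubuntu-20.04.tar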
Every restart attempt results in the following log entries.
open \\.\pipe\dockerProcd: The system cannot find the file specified.
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Log:
[2022-08-01T14:09:47.861542400Z][com.docker.backend.exe][I] (15e5c8d5) 4b373d33-BackendAPI S->C DockerDesktopElectron POST /nps (1.6312ms): OK
[2022-08-01T14:09:50.286711200Z][com.docker.backend.exe][I] (e474d45c) 4b373d33-BackendAPI S<-C DockerDesktopElectron POST /analytics/track
[2022-08-01T14:09:50.287235200Z][com.docker.backend.exe][I] (e474d45c) 4b373d33-BackendAPI S<-C DockerDesktopElectron bind: {"body":null,"event":"actionMenuSwitchWindowsDaemon"}
[2022-08-01T14:09:50.287758600Z][com.docker.backend.exe][I] (e474d45c) 4b373d33-BackendAPI S->C DockerDesktopElectron POST /analytics/track (1.0474ms): OK
[2022-08-01T14:09:50.288277300Z][com.docker.backend.exe][I] Usage statistics: actionMenuSwitchWindowsDaemon
[2022-08-01T14:09:50.288277300Z][com.docker.backend.exe][I] anonymous remaining time: 23h35m58.7117227s
[2022-08-01T14:09:51.767785100Z][IPCServer ][Info ] (3f58fd7b) acc5d626-CSharpAPI S<-C DockerDesktopElectron POST /desktop/switch-engine
[2022-08-01T14:09:51.773786400Z][IPCServer ][Info ] (3f58fd7b) acc5d626-CSharpAPI S->C DockerDesktopElectron POST /desktop/switch-engine (6ms): OK
[2022-08-01T14:09:53.056235800Z][WslKeepAlive ][Info ] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
[2022-08-01T14:09:53.059234500Z][WslKeepAlive ][Info ] wsl keep-alive stopped
[2022-08-01T14:09:53.059234500Z][WslKeepAlive ][Warning] stopped unexpectedly
[2022-08-01T14:09:53.244964600Z][vpnkit-bridge.exe][W] windows: still waiting for dns-forwarder, volume-contents, lifecycle-server, wsl2-bootstrap-expose-ports, devenv-volumes, procd, docker, debug-shell, diagnosticd, wsl2-cross-distro-service, log after 10.01406s
[2022-08-01T14:10:03.238970300Z][vpnkit-bridge.exe][W] windows: still waiting for dns-forwarder, volume-contents, lifecycle-server, wsl2-bootstrap-expose-ports, devenv-volumes, procd, docker, debug-shell, diagnosticd, wsl2-cross-distro-service, log after 20.0080657s
[2022-08-01T14:10:05.610766100Z][com.docker.backend.exe][W] 526c5971-PauseHDL /pause/events server not replying: Get "http://ipc/pause/events": open \\.\pipe\dockerProcd: The system cannot find the file specified.
[2022-08-01T14:10:13.241610100Z][vpnkit-bridge.exe][W] windows: still waiting for lifecycle-server, wsl2-bootstrap-expose-ports, devenv-volumes, procd, docker, dns-forwarder, volume-contents, wsl2-cross-distro-service, log, debug-shell, diagnosticd after 30.0105872s
I am a newbie to GKE.
I have a Python app running inside a GKE pod. The pod gets evicted as out of memory after 30 minutes. Total VM memory is 13 GB, and when I ssh into the pod, the peak memory used before eviction is only about 3 GB...
I have tried running some dummy code in the Dockerfile ("CMD tail -f /dev/null"), then connecting to the pod and running the scraper script manually, with success: it was able to finish with a peak memory usage of 9 GB.
Dockerfile:
CMD python3 scraper.py
Managed pods:
Revision  Name                     Status   Restarts  Created on
1         scraper-df68b65bf-gbhms  Running  0         Sep 2, 2019, 2:59:59 PM
1         scraper-df68b65bf-gktqw  Running  0         Sep 2, 2019, 2:59:59 PM
1         scraper-df68b65bf-z4kjb  Running  0         Sep 2, 2019, 2:59:59 PM
1         scraper-df68b65bf-wk6td  Running  0         Sep 2, 2019, 3:00:45 PM
1         scraper-df68b65bf-xqm7h  Running  0         Sep 2, 2019, 3:00:45 PM
My guess is that there are many instances of my app running in parallel pods inside the same 13 GB of space? How do I run a single instance of my app on GKE so that all of the VM's memory is available to it?
Do you have the replica count set to one in your deployment.yaml file?
spec:
  replicas: 1
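A quick way to check what is actually running (a sketch; I'm assuming the Deployment is named scraper):
kubectl get deployment scraper      # the READY column shows current/desired replicas
kubectl get pods -l app=scraper     # assuming the pods carry an app=scraper label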
In case it is a HorizontalPodAutoscaler, you can edit it as follows:
Get the HorizontalPodAutoscaler
kubectl get HorizontalPodAutoscaler
Edit it by using the edit command
kubectl edit HorizontalPodAutoscaler <pod scaler name>
And the resulting HorizontalPodAutoscaler looks like this:
spec:
  maxReplicas: 1
  minReplicas: 1
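Alternatively, the replica count can be changed directly from the command line (a sketch, assuming the Deployment is named scraper):
kubectl scale deployment scraper --replicas=1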
Awesome reply, @Bismal.
@Wotjas, just to add my 2 cents: you can use the Cloud Console to set the min and max values; you just need to go to:
Cloud Menu -> GKE -> Workloads -> Actions -> Scale
Set the desired values, then save.
More detailed information can be found in this document [1].
[1] https://cloud.google.com/kubernetes-engine/docs/how-to/scaling-apps
I have a problem like this. I am very new to Hyperledger Fabric. I attached a shell to a running peer container in Visual Studio Code and ran the peer node status command in that terminal; it gives me an error saying:
2018-09-13 09:08:04.621 UTC [nodeCmd] status -> INFO 040 Error trying to get status from local peer: rpc error: code = Unknown desc = access denied
status:UNKNOWN
Error: Error trying to connect to local peer: rpc error: code = Unknown desc = access denied
Can someone help me solve this problem? I searched a lot but was unable to find a solution. Thank you.
Edit: the problem is that you are using an old card with a new setup. When you create the app and then restart the environment, the certificates are regenerated.
I guess the problem is the FABRIC_VERSION. When you set it to hlfv1 and get a bash shell inside the peer container (docker exec -it peer0.org1.example.com bash), the peer commands work properly, but when you set it to hlfv12, some peer commands do not work. I guess there is something wrong with the startup scripts. By the way, there is no "creds" folder under hlfv12/composer like there is under hlfv1/composer.
The peer node status command must be called by an administrator of the peer (someone who holds a private key matching one of the public keys in the MSP admincerts folder).
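To see which identities a peer treats as admins, you can list its admincerts folder; the path below is an assumption based on the usual cryptogen layout:
ls crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp/admincerts/
# the certificate of the identity running the peer CLI must match one of these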
You need to run peer commands on a properly configured client (one with the correct authentication material). In my case that was the CLI node.
Peer node logs:
root@bba2c96e744e:/# peer node status
2019-04-04 13:26:18.407 UTC [nodeCmd] status -> INFO 001 Error trying to get status from local peer: rpc error: code = Unknown desc = access denied
status:UNKNOWN
Error: Error trying to connect to local peer: rpc error: code = Unknown desc = access denied
root@bba2c96e744e:/# peer chaincode list --installed
Error: Bad response: 500 - access denied for [getinstalledchaincodes]: Failed verifying that proposal's creator satisfies local MSP principal during channelless check policy with policy [Admins]: [This identity is not an admin]
root@bba2c96e744e:/# peer logging getlevel system
Error: rpc error: code = Unknown desc = access denied
CLI node logs:
root@4079f33980f3:/# peer node status
status:STARTED
root@4079f33980f3:/# peer chaincode list --installed
Get installed chaincodes on peer:
Name: ccc, Version: 1.0, Path: chaincode/ccc, Id: e75e5770a29401d840b46a775854a1bb8576c6d83cf2832dce650d2a984ab29a
root@4079f33980f3:/# peer logging getlevel system
2019-04-04 13:26:02.287 UTC [cli/logging] getLevel -> INFO 001 Current log level for peer module 'system': INFO
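For reference, the CLI container works because it has the admin MSP material configured through environment variables, roughly like this (the values and paths are assumptions based on the standard first-network setup):
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
peer node status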
I am trying to start up the network using the following command:
./network_setup.sh up
After running this command I am receiving this error:
#
# Generating anchor peer update for Org2MSP
###########################################################
2017-06-05 18:16:35.716 CST [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-05 18:16:35.719 CST [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 002 Generating anchor peer update
2017-06-05 18:16:35.719 CST [common/configtx/tool] doOutputAnchorPeersUpdate -> INFO 003 Writing anchor peer update
Pulling cli (hyperledger/fabric-tools:latest)...
ERROR: repository hyperledger/fabric-tools not found: does not exist or no pull access
ERROR !!!! Unable to pull the images
How can I fix this error? Please help me.
You can manually pull this image (and any other Hyperledger Fabric image) from DockerHub. There was a period when the fabric-tools image was not included in the helper script download-dockerimages.sh.
docker pull hyperledger/fabric-tools:x86_64-1.0.0-beta
docker tag hyperledger/fabric-tools:x86_64-1.0.0-beta hyperledger/fabric-tools
Note that it might be worth reviewing the set of published tags on DockerHub to be sure you are getting the latest.
https://hub.docker.com/u/hyperledger/
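If other images are missing as well, the same pull-and-retag pattern can be applied to the rest of the images the script expects (a sketch; the image list and tag are assumptions based on a standard 1.0.0-beta setup):
for IMAGE in fabric-peer fabric-orderer fabric-ca fabric-ccenv fabric-couchdb fabric-tools; do
  docker pull hyperledger/$IMAGE:x86_64-1.0.0-beta
  docker tag hyperledger/$IMAGE:x86_64-1.0.0-beta hyperledger/$IMAGE:latest
done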
Write this command line:
docker pull hyperledger/fabric-tools:x86_64-1.1.0-rc1
and after that:
docker tag hyperledger/fabric-tools:x86_64-1.1.0-rc1 hyperledger/fabric-tools:latest
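Afterwards you can confirm that both tags are present locally:
docker images hyperledger/fabric-tools
# should list both the x86_64-1.1.0-rc1 and latest tags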