Kubernetes deployments in IBM Cloud fail for me - docker

I am trying to deploy an app to a Kubernetes cluster, following these instructions:
https://cloud.ibm.com/docs/containers?topic=containers-cs_apps_tutorial#cs_apps_tutorial
Then I build the image following the instructions, with ibmcloud cr build -t registry..bluemix.net//hello-world:1 .
The output looks good except for a security warning:
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
But as this was just a test, I did not worry about it.
At the next stage, following the instructions, I run this command:
kubectl run hello-world-deployment --image=registry..bluemix.net//hello-world:1
I get the following error:
error: failed to discover supported resources: Get http://localhost:8080/apis/apps/v1?timeout=32s: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
As you can see in the message, it looks like kubectl is trying to connect to my local PC rather than to IBM Cloud. What have I missed?

As @N Fritze mentioned in the comments, in order to access the Kubernetes cluster you may need to set the KUBECONFIG environment variable, which holds a list of kubeconfig files supplying the information needed to authenticate against the API server.
You can find more information about managing the Kubernetes service in the official IBM Cloud documentation. As the issue has already been solved, this answer is written up for future readers' reference.
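For reference, a minimal sketch of pointing kubectl at an IBM Cloud cluster, assuming a cluster named mycluster (a hypothetical name); older CLI versions instead print an export KUBECONFIG=... line that you must run yourself:
ibmcloud login
ibmcloud ks cluster config --cluster mycluster
# verify kubectl now targets the IBM Cloud cluster instead of localhost
kubectl config current-context
kubectl get nodes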

Related

docker: Error response from daemon: manifest for gcr.io/google_containers/hyperkube-amd64:v1.24.2 not found

Following this guide:
https://jamesdefabia.github.io/docs/getting-started-guides/docker/
and both
export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
and
export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/latest.txt)
fail at the docker run stage with a not-found error, e.g.:
docker: Error response from daemon: manifest for gcr.io/google_containers/hyperkube-amd64:v1.24.2 not found: manifest unknown: Failed to fetch "v1.24.2" from request "/v2/google_containers/hyperkube-amd64/manifests/v1.24.2".
Any suggestions?
Check the repo of hyperkube and use an available tag:
https://console.cloud.google.com/gcr/images/google-containers/global/hyperkube-amd64
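For instance, assuming the gcloud CLI is installed and authenticated, you can list the tags that actually exist before picking one:
# list the available tags for the hyperkube image
gcloud container images list-tags gcr.io/google_containers/hyperkube-amd64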
As mentioned by @zerkms and @vladtkachuk, the Google hyperkube image is not available anymore. As stated in the documentation:
Hyperkube, an all-in-one binary for Kubernetes components, is now deprecated and will not be built by the Kubernetes project going forward. Several, older beta API versions are deprecated in 1.19 and will be removed in version 1.22. We will provide a follow-on update since this means 1.22 will likely end up being a breaking release for many end users.
Setting up a local Kubernetes environment as your development environment is the recommended option no matter your situation, because this setup creates a safe and agile application-deployment process.
Fortunately, there are multiple platforms that you can try out to run Kubernetes locally, and they are all open source and available under the Apache 2.0 license.
Minikube's primary goals are to be the best tool for local Kubernetes application development and to support all Kubernetes features that fit.
kind runs local Kubernetes clusters using Docker container "nodes."
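As a quick sketch, assuming minikube or kind is already installed, either tool gives you a local cluster in one command:
# option 1: minikube
minikube start
kubectl get nodes
# option 2: kind (Kubernetes in Docker)
kind create cluster
kubectl get nodes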

Cloud Build docker image unable to write files locally - fail to open file... permission denied

Using Service Account credentials, I can successfully run Cloud Build to spin up gsutil, move files from GCS into the instance, and then copy them back out. All is good.
One of the Cloud Build steps loads a Docker image from an outside source; it loads fine and reports its own help info successfully. But when run, it fails with the error message:
fail to open file "..intermediary_work_product_file." permission denied
For the app I'm running in this step, this error is typically produced when the file cannot be written to its default location. I've set dir = "/workspace" to confirm the default.
So how do I grant the app running inside a Cloud Build step read/write permission to write its own intermediary work product to the local folders? The Cloud Build itself runs fine using Service Account credentials. I have tried adding more permissions, including the Storage, Cloud Run, Compute Engine, and App Engine admin roles, but I get the same error.
I assume that the credentials used to create the instance are passed through to run time. I have dug deep into the GCP Cloud Build documentation and examples but found no answers.
There must be something fundamental I'm overlooking.
This problem was resolved by changing the Dockerfile USER, as suggested by @PRAJINPRAKASH in this helpful answer: https://stackoverflow.com/a/62218160/4882696
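In short, the step's image ran as a non-root user that could not write to the working directory; a minimal Dockerfile sketch of the fix (the base image name here is hypothetical):
# hypothetical base image whose default user is non-root
FROM some/base-image:latest
# run as root so the app in this build step can write its
# intermediary files to /workspace
USER root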
I tried to solve this by systematically testing GCP services and role permissions. All Service Account credentials tested were able to create container instances and run gcloud or gsutil fine. However, the custom apps created containers but failed when writing locally, even to the default shared /workspace.
When using GCP Cloud Build, local read/write permissions do not "pass through" from the default service account to the runtime instance. The documentation is not clear on this.
I encountered this problem while building my React app with Cloud Build; I wasn't able to install node-sass globally...
So I recursively chowned the /usr directory to nobody:nogroup, and it worked. I have no idea whether there is a better solution, but the important thing is that it fixed my issue.
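For reference, the workaround was essentially the following; note it must run in the same build step (or in the Dockerfile) as the install, since changes outside /workspace do not persist across Cloud Build steps:
# give the unprivileged user write access to the global npm tree
chown -R nobody:nogroup /usr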
I had a similar problem; the snippet I was looking for in my cloudbuild manifest was:
- id: perms
  name: "gcr.io/cloud-builders/git"
  entrypoint: "chmod"
  args: ["-v", "-R", "a+rw", "."]
  dir: "path/to/some/dir"

Using the Transfer for on-premises option to transfer files

I am trying to copy files from a Linux directory to a GCP bucket using the "Transfer for on-premises" option. I've installed the Docker script on Linux, and the GCP bucket is created. I now need to run a docker run command to copy the files. My question is how to specify the source and target in the docker command. For example:
sudo docker run --source --target --hostname=$(hostname) --agent-id-prefix=ID123456789
The short answer is you can't supply a source/destination to this command, because its purpose is not to transfer the data. This command starts the agents for the service - agents are always-running processes that help you move data.
After starting agents that have access to your files, you issue a copy command in the Cloud Console, where you can specify a source directory and target bucket+prefix. When you do this, the service will contact the agents and use them to push the data to Google Cloud in parallel, for faster transfers. See the following links for more details:
Overview of how Transfer Service for on-premises data works
Setting up the service, and how to submit a transfer job
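For illustration, starting an agent typically looks something like the sketch below; the image name and flags are modeled on Google's documentation and may have changed, and the mounted path and project ID are hypothetical:
# run the agent and give it access to the local source directory
sudo docker run -d --rm \
  -v /data/to/transfer:/data/to/transfer \
  gcr.io/cloud-ingest/tsop-agent:latest \
  --project-id=my-project \
  --hostname=$(hostname) \
  --agent-id-prefix=ID123456789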

Not able to connect to a container (created via REST API) in Kubernetes

I am creating a Docker container (using docker run) in a Kubernetes environment by invoking a REST API.
I have mounted the host machine's docker.sock, and I am building an image and running that image from the REST API.
Now I need to connect to this container from some other container, which is actually started by kubectl from a deployment.yml file.
But when I use kubectl describe pod <pod name>, my container created via the REST API is not there. So where is this container running, and how can I connect to it from some other container?
Are you running the container in the same namespace as the one used in deployment.yml? One option to check that would be to run:
kubectl get pods --all-namespaces
If you are not able to find the Docker container there, then I would suggest performing the steps below:
docker ps -a (to verify the container's running status)
Ensure that there are no permission errors while mounting docker.sock
If there are permission errors, escalate privileges to the appropriate level
To answer the second question, connecting the two containers should be possible by referencing the cluster DNS in the format below:
"<servicename>.<namespacename>.svc.cluster.local"
I would also ask you to detail the steps, code, and errors (if there are any) so I can better answer the question.
You probably shouldn't be directly accessing the Docker API from anywhere in Kubernetes. Kubernetes will be totally unaware of anything you manually docker run (or equivalent), and, as you note, normal administrative calls like kubectl get pods won't see it; the CPU and memory used by the container won't be accounted for by the node, which could cause the node to become over-utilized. The Kubernetes network environment is also pretty complicated, and unless you know the details of your specific CNI provider, it will be hard to make your container accessible at all, much less from a pod running on a different node.
A process running in a pod can access the Kubernetes API directly, though, and the documentation notes that all of the official client libraries are aware of the conventions this uses. This means that you should be able to directly create a Job that launches your target pod, and a Service that connects to it, and get the normal Kubernetes features around this. (For example, servicename.namespacename.svc.cluster.local is a valid DNS name that reaches any Pod connected to the Service.)
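As a rough sketch of that approach (all names, images, and ports here are hypothetical), the Job and Service might look like:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-task
spec:
  template:
    metadata:
      labels:
        app: my-task   # lets the Service select this pod
    spec:
      containers:
      - name: worker
        image: registry.example.com/my-task:1
        ports:
        - containerPort: 8080
      restartPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: my-task
spec:
  selector:
    app: my-task
  ports:
  - port: 8080
    targetPort: 8080
Other pods can then reach the Job's pod at my-task.<namespace>.svc.cluster.local:8080.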
You should also consider whether you actually need this sort of interface. For many applications, it will work just as well to deploy some sort of message-queue system (e.g., RabbitMQ) and then launch a pool of workers that connect to it. You can control the size of the worker pool using a Deployment. This is easier to develop, since it avoids a hard dependency on Kubernetes, and easier to manage, since it prevents a flood of dynamic jobs from overwhelming your cluster.

Hyperledger Fabric v1 network on physical peers

How do I set up a Hyperledger Fabric v1 network on physical peers instead of Docker peers?
You can take a look at https://github.com/yacovm/fabricDeployment
It deploys automatically to Linux virtual machines / physical hosts:
A few peers, according to your configuration
A solo orderer
Everything with TLS
Creates a channel and installs and invokes example02 chaincode for sanity testing
The Docker containers provide a mechanism that takes care of a lot of configuration behind the curtain, and that is the preferred way. If you choose to run Fabric directly on a server without Docker, one way would be to build the binaries yourself via the make command, then look at 1) the shell script in the getting-started guide and 2) the docker-compose file (in http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html) to deconstruct the steps and configs, but this will be a pretty involved process.
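A rough sketch of the binary-build route, assuming a Go toolchain and the other build prerequisites from the Fabric docs (the Makefile targets reflect v1.x releases and may differ):
git clone https://github.com/hyperledger/fabric.git
cd fabric
# check out the v1.x release you are targeting, e.g.:
git checkout v1.0.0
# build the native binaries (peer, orderer, configtxgen, cryptogen, ...)
make native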
