Hope you are all doing well.
Environment: CentOS 7.3.1611, Kubernetes 1.5, Docker 1.12
Problem 1: The extended JBoss Docker image builds successfully, but the pod does not start.
The pod gets an error; see step 7 below.
Problem 2: Once problem #1 is fixed, I would like to upload the image to Docker Hub: https://hub.docker.com/
Could you please share the upload steps, if possible?
1) pull
docker pull jboss/wildfly
2) vi Dockerfile
FROM jboss/wildfly
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin123$ --silent
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
3) Extend docker image
docker build --tag=nbasetty/wildfly-server .
4) [root@centos7 custom-jboss]# docker images
REPOSITORY                TAG      IMAGE ID       CREATED          SIZE
nbasetty/wildfly-server   latest   c1fbb87faffd   43 minutes ago   583.8 MB
docker.io/httpd           latest   e0645af13ada   2 weeks ago      177.5 MB
5) vi jboss-wildfly-rc-service-custom.yaml
apiVersion: v1
kind: Service
metadata:
  name: wildfly-service
spec:
  externalIPs:
  - 10.0.2.15
  selector:
    app: wildfly-rc-pod
  ports:
  - name: web
    port: 8080
  #- name: admin-console
  #  port: 9990
  type: LoadBalancer
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: wildfly-rc
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wildfly-rc-pod
    spec:
      containers:
      - name: wildfly
        image: nbasetty/wildfly-server
        ports:
        - containerPort: 8080
        #- containerPort: 9990
6) kubectl create -f jboss-wildfly-rc-service-custom.yaml
7) [root@centos7 jboss]# kubectl get pods
NAME               READY   STATUS             RESTARTS   AGE
mysql-pvc-pod      1/1     Running            6          2d
wildfly-rc-d0k3h   0/1     ImagePullBackOff   0          23m
wildfly-rc-hgsfj   0/1     ImagePullBackOff   0          23m
[root@centos7 jboss]# kubectl logs wildfly-rc-d0k3h
Error from server (BadRequest): container "wildfly" in pod
"wildfly-rc-d0k3h" is waiting to start:
trying and failing to pull image
Glad you have found a way to make it work. Here are the steps I followed:
I labeled node-01 with 'dbserver: mysql' (see the label command sketch after the manifest below).
I built the Docker image on node-01.
I created this pod, and it worked.
apiVersion: v1
kind: ReplicationController
metadata:
  name: wildfly-rc
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wildfly-rc-pod
    spec:
      containers:
      - name: wildfly
        image: nbasetty/wildfly-server
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      nodeSelector:
        dbserver: mysql
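The node label mentioned in the first step can be applied like this (a sketch, using the node name from the steps above):
# label node-01 so the nodeSelector above matches it
kubectl label nodes node-01 dbserver=mysql
# verify the label is set
kubectl get nodes --show-labels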
Re-creating the issue:
docker pull jboss/wildfly
mkdir jw
cd jw
echo 'FROM jboss/wildfly
RUN /opt/jboss/wildfly/bin/add-user.sh admin admin123$ --silent
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]' | tee Dockerfile
docker build --tag=docker.io/surajd/wildfly-server .
See the images available:
# docker images
REPOSITORY                        TAG      IMAGE ID       CREATED          SIZE
docker.io/surajd/wildfly-server   latest   10e96902ea12   11 seconds ago   583.8 MB
Create a config that works:
echo '
apiVersion: v1
kind: Service
metadata:
  name: wildfly
spec:
  selector:
    app: wildfly
  ports:
  - name: web
    port: 8080
  type: LoadBalancer
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wildfly
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wildfly
    spec:
      containers:
      - name: wildfly
        image: docker.io/surajd/wildfly-server
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
' | tee config.yaml
kubectl create -f config.yaml
Notice the field imagePullPolicy: Never. It tells the kubelet to use the image already available on the node (the one we built with docker build). This works on a single-node cluster, but may or may not work on a multi-node cluster, because the image might not be present on the node the pod gets scheduled to, so it is not a value to rely on in general. Since we are experimenting on a single-node cluster, Never is fine here; otherwise set imagePullPolicy: Always, so that whenever a pod is scheduled the image is pulled from the registry. Read about imagePullPolicy and the related configuration tips.
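For a multi-node cluster, the relevant container fragment would instead look roughly like this (a sketch; it assumes the image has been pushed to a registry first, as described next):
      containers:
      - name: wildfly
        image: docker.io/surajd/wildfly-server
        # pull from the registry on every scheduling
        imagePullPolicy: Always
        ports:
        - containerPort: 8080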
To pull the image from a registry, the image first has to be on that registry. So, to answer your question about pushing it to Docker Hub, run:
docker push docker.io/surajd/wildfly-server
In the above example, replace surajd with your own Docker registry username.
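For completeness, and assuming nbasetty is also your Docker Hub username, the push flow for the question's image would look roughly like this:
# authenticate against Docker Hub
docker login
# the image is already tagged with the username, so it can be pushed directly
docker push nbasetty/wildfly-server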
Here are the steps I used to set up a single-node cluster on CentOS:
My machine version:
$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
Here is what I have done:
Set up a single-node k8s cluster on CentOS as follows (src1 & src2):
yum update -y
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
kubeadm init
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
kubectl taint nodes --all node-role.kubernetes.io/master-
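As a quick sanity check of the cluster (a sketch; note that kubeadm setups also typically need a pod network add-on such as Weave or Flannel installed before the node reports Ready, a step not shown above):
# the node should eventually report Ready
kubectl get nodes
# control-plane pods should be Running
kubectl get pods --all-namespaces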
Now k8s version:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Related
I am using WSL2 on Windows.
I made a Flask service in minikube in WSL2 and a Docker container in WSL2 separately.
I want to make a request to the Flask service in minikube from the container in WSL2.
Steps to create a flask service
flask_service.py (only the last lines are shown; the service is served at /rss)
if __name__ == '__main__':
app.run(debug=False, host='0.0.0.0', port=8001)
Dockerfile
FROM python:3
COPY flask_service.py ./
WORKDIR .
RUN apt-get update
RUN apt-get install -y nano
RUN pip install numpy pandas Flask connectorx sqlalchemy pymysql jsonpickle
EXPOSE 8001
ENTRYPOINT ["python"]
CMD ["flask_service.py"]
minikube setting
minikube start --mount --mount-string="/home/sjw/kube:/home/sjw/kube"
kubectl proxy --address 0.0.0.0 --port 30001
minikube tunnel
getdb service manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: getdbdp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: getdb
  template:
    metadata:
      labels:
        app: getdb
    spec:
      containers:
      - name: getdb
        image: "desg2022/01getdb:v02"
        env:
        - name: "PORT"
          value: "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: getdb-lb
spec:
  type: LoadBalancer
  selector:
    app: getdb
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8001
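With minikube tunnel running, the LoadBalancer service should also get an external IP that can be checked and called like this (a sketch; the external IP is a placeholder to be read from the command output):
kubectl get svc getdb-lb
# then, using the reported EXTERNAL-IP:
curl http://<EXTERNAL-IP>:8080/rss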
First, local access (from Windows) to the Flask service was possible with the address below:
http://localhost:30001/api/v1/namespaces/default/services/http:getdb-lb:8080/proxy/rss
Second, when connecting from inside the same minikube cluster:
http://localhost:8001/rss
My question: I created a Docker container in WSL2 as follows.
docker-compose.yaml (the image is Ubuntu with only Python and pip installed)
version: '2.3'
services:
  master:
    container_name: gputest1
    image: desg2022/ubuntu:v01
    stdin_open: true # docker run -i
    tty: true        # docker run -t
    ports:
      - 8080:8888
    command: "/bin/bash"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    ipc: 'host'
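For reference, the stack is presumably brought up and entered with something like:
docker-compose up -d
docker exec -it gputest1 /bin/bash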
Inside this container I want to access getdb in minikube. What address should I put in?
minikube start fails with the error libmachine: Error dialing TCP: dial tcp 10.43.239.243:49167: connect: no route to host when run in the setup below:
a k8s cluster (with containerd as the container runtime) with 2 pods: one with a Docker client container, the second with a Docker daemon container.
dind daemon resources:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dind
spec:
  selector:
    matchLabels:
      app: dind
  serviceName: "dind"
  template:
    metadata:
      labels:
        app: dind
    spec:
      containers:
      - name: dind-daemon
        image: docker:20.10.17-dind
        securityContext:
          privileged: true
        env:
        - name: DOCKER_TLS_CERTDIR
          value: ""
---
apiVersion: v1
kind: Service
metadata:
  name: dind
spec:
  selector:
    app: dind
  type: ClusterIP
  ports:
  - name: daemon
    protocol: TCP
    port: 2375
    targetPort: 2375
dind client resources:
apiVersion: v1
kind: Pod
metadata:
  name: "docker-client"
  labels:
    app: "docker-client"
spec:
  containers:
  - name: docker-client
    image: "docker:latest"
    env:
    - name: DOCKER_HOST
      value: "tcp://dind:2375"
minikube start is run inside the docker client container.
How can I debug this issue, and what might be the reason for it? 10.43.239.243 is the IP of the dind ClusterIP service. The error happens after these lines in the minikube log:
I0804 09:46:35.049413 222 main.go:134] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
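As a first debugging step (a sketch using the pod and service names from the manifests above), it is worth confirming that the client pod can reach the dind daemon at all:
# the docker CLI in the client pod uses DOCKER_HOST=tcp://dind:2375 from the pod spec
kubectl exec -it docker-client -- docker version
# the daemon's ping endpoint should answer OK if the service is reachable
kubectl exec -it docker-client -- wget -qO- http://dind:2375/_ping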
I tried the same experiment with both containers running without Kubernetes (using the Docker daemon directly). In that case, both used the same Docker network, the daemon container was started with the dind network alias, and minikube start succeeded.
Below are the commands used:
docker daemon container:
docker run --name dind -d --privileged --network dind --network-alias dind -e DOCKER_TLS_CERTDIR="" docker:dind
docker client container:
docker run --name dind-client -it --network dind -e DOCKER_HOST="tcp://dind:2375" docker sh
/ # wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
/ # mv minikube-linux-amd64 minikube
/ # chmod +x minikube
/ # ./minikube start --force
...
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
/ # ./minikube kubectl -- run hello --image=hello-world
/ # ./minikube kubectl -- logs pod/hello
Hello from Docker!
I've built a custom-spark docker image with the following dependencies:
Python 3.6.9
Pip 1.18
Java OpenJDK 64-Bit Server VM, 1.8.0_212
Hadoop 3.2
Scala 2.13.0
Spark 3.0.3
which I pushed to Docker Hub: https://hub.docker.com/r/redaer7/custom-spark
The Dockerfile, spark-master and spark-worker files are stored under: https://github.com/redaER7/Custom-Spark
I verified that /spark-master and /spark-worker work well when creating a container from this image:
docker run -it -d --name spark_1 redaer7/custom-spark:1.0 bash
docker exec -it $CONTAINER_ID /bin/bash
My issue occurs when I try to build a K8s cluster from this image with the following YAML file for the Spark master pod:
kubectl create namespace sparkspace
kubectl -n sparkspace create -f ./spark-master-deployment.yaml
#yaml file
kind: Deployment
apiVersion: apps/v1
metadata:
  name: spark-master
spec:
  replicas: 1 # should always be one
  selector:
    matchLabels:
      component: spark-master
  template:
    metadata:
      labels:
        component: spark-master
    spec:
      containers:
        - name: spark-master
          image: redaer7/custom-spark:1.0
          imagePullPolicy: IfNotPresent
          command: ["/spark-master"]
          ports:
            - containerPort: 7077
            - containerPort: 8080
          resources:
            # limits:
            #   cpu: 1
            #   memory: 1G
            requests:
              cpu: 1 #100m
              memory: 1G
I get CrashLoopBackOff when viewing the pod with kubectl -n sparkspace get pods.
When inspecting it with kubectl -n sparkspace describe pod $Pod_Name, I see a warning.
Any clue about that first warning? Thank you.
I solved it simply by re-pulling the image:
imagePullPolicy: Always
I had edited the Docker image locally but had not changed the following in the config file:
imagePullPolicy: IfNotPresent
Then I pushed the image to Docker Hub for later deployment.
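In other words, after editing the image locally, the flow was roughly this (a sketch using the names from the question):
docker push redaer7/custom-spark:1.0
# and in spark-master-deployment.yaml:
        - name: spark-master
          image: redaer7/custom-spark:1.0
          imagePullPolicy: Always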
I am trying to set up Spark on Kubernetes on Mac. I followed this tutorial web page, and it looked straightforward to understand.
Below is the Dockerfile.
# base image
FROM java:openjdk-8-jdk
# define spark and hadoop versions
ENV SPARK_VERSION=3.0.0
ENV HADOOP_VERSION=3.3.0
# download and install hadoop
RUN mkdir -p /opt && \
cd /opt && \
curl http://archive.apache.org/dist/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz | \
tar -zx hadoop-${HADOOP_VERSION}/lib/native && \
ln -s hadoop-${HADOOP_VERSION} hadoop && \
echo Hadoop ${HADOOP_VERSION} native libraries installed in /opt/hadoop/lib/native
# download and install spark
RUN mkdir -p /opt && \
cd /opt && \
curl http://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop2.7.tgz | \
tar -zx && \
ln -s spark-${SPARK_VERSION}-bin-hadoop2.7 spark && \
echo Spark ${SPARK_VERSION} installed in /opt
# add scripts and update spark default config
ADD common.sh spark-master spark-worker /
ADD spark-defaults.conf /opt/spark/conf/spark-defaults.conf
ENV PATH $PATH:/opt/spark/bin
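The deployment below references the image as spark-hadoop:3.0.0, so the build step was presumably along these lines:
docker build -t spark-hadoop:3.0.0 .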
After building the Docker image, I ran the following commands, but the pod doesn't start.
$ kubectl create -f ./kubernetes/spark-master-deployment.yaml
$ kubectl create -f ./kubernetes/spark-master-service.yaml
spark-master-deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: spark-master
spec:
  replicas: 1
  selector:
    matchLabels:
      component: spark-master
  template:
    metadata:
      labels:
        component: spark-master
    spec:
      containers:
        - name: spark-master
          image: spark-hadoop:3.0.0
          command: ["/spark-master"]
          ports:
            - containerPort: 7077
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
spark-master-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: spark-master
spec:
  ports:
    - name: webui
      port: 8080
      targetPort: 8080
    - name: spark
      port: 7077
      targetPort: 7077
  selector:
    component: spark-master
To trace the problem, I ran the kubectl describe... command and got the following result.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 45s default-scheduler Successfully assigned default/spark-master-fc7c95485-zn6wf to minikube
Normal Pulled 21s (x3 over 44s) kubelet, minikube Container image "spark-hadoop:3.0.0" already present on machine
Normal Created 21s (x3 over 44s) kubelet, minikube Created container spark-master
Warning Failed 21s (x3 over 43s) kubelet, minikube Error: failed to start container "spark-master": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/spark-master\": stat /spark-master: no such file or directory": unknown
Warning BackOff 8s (x3 over 42s) kubelet, minikube Back-off restarting failed container
It seems that the container didn't start, but I couldn't figure out why the pod does not start correctly even though I only followed the instructions on the web page.
Below is the GitHub URL from the web page that guided me in configuring Spark on Kubernetes:
https://github.com/testdrivenio/spark-kubernetes
I assume you are using Minikube.
For Minikube, make the following changes:
Eval the Docker env using: eval $(minikube docker-env)
Build the Docker image: docker build -t my-image .
Set the image name to just "my-image" in the pod specification in your YAML file.
Set the imagePullPolicy to Never in your YAML file. Here is an example:
apiVersion:
kind:
metadata:
spec:
  template:
    metadata:
      labels:
        app: my-image
    spec:
      containers:
      - name: my-image
        image: "my-image"
        imagePullPolicy: Never
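Applied to this question (a sketch, assuming the Dockerfile from the question is in the current directory), that would look roughly like:
# build the image against Minikube's Docker daemon
eval $(minikube docker-env)
docker build -t spark-hadoop:3.0.0 .
# in spark-master-deployment.yaml the image name can stay as spark-hadoop:3.0.0; just add:
#   imagePullPolicy: Never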
It seems like you didn't copy the scripts developed by the blogger, which are in this project. The image contains the instruction ADD common.sh spark-master spark-worker /, so your image is missing the script needed to run the master (you will have the same problem with the workers). You can clone the project and then build the image, or use the image published by the blogger, mjhea0/spark-hadoop.
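A sketch of the clone-and-build route, assuming the repository linked in the question (check where the Dockerfile, common.sh, spark-master and spark-worker actually live in the repo):
git clone https://github.com/testdrivenio/spark-kubernetes.git
cd spark-kubernetes
# build from the directory that contains the Dockerfile and the scripts
docker build -t spark-hadoop:3.0.0 .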
Here you are trying to set up a Spark standalone cluster on Kubernetes, but you can use Kubernetes itself as the Spark cluster manager: as of release 3.1.0, Spark announced that Kubernetes officially became a supported cluster manager (it had been experimental since release 2.3). Here is the official documentation. You can also use spark-on-k8s-operator, developed by Google, to submit and manage the jobs on your Kubernetes cluster.
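For reference, submitting a job with Kubernetes as the cluster manager looks roughly like this (placeholders for the API server address, image name and example jar path; see the official documentation for the exact options):
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  local:///path/to/examples.jar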
I'm trying to follow Docker's Getting Started tutorials, but I get stuck when I have to work with Kubernetes. I'm using MicroK8s to create the clusters.
My Dockerfile:
FROM node:6.11.5
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
CMD [ "npm", "start" ]
My bb.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bb-demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      bb: web
  template:
    metadata:
      labels:
        bb: web
    spec:
      containers:
      - name: bb-site
        image: bulletinboard:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: bb-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    bb: web
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
I create the image with
docker image build -t bulletinboard:1.0 .
And I create the pod and the service with:
microk8s.kubectl apply -f bb.yaml
The pod is created, but when I look at the state of my pods with
microk8s.kubectl get all
it says:
NAME                           READY   STATUS             RESTARTS   AGE
pod/bb-demo-7ffb568776-6njfg   0/1     ImagePullBackOff   0          11m

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/bb-entrypoint   NodePort    10.152.183.2   <none>        8080:30001/TCP   11m
service/kubernetes      ClusterIP   10.152.183.1   <none>        443/TCP          4d

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/bb-demo   0/1     1            0           11m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/bb-demo-7ffb568776   1         1         0       11m
Also, when I look at it in the Kubernetes dashboard, it says:
Failed to pull image "bulletinboard:1.0": rpc error: code = Unknown desc = failed to resolve image "docker.io/library/bulletinboard:1.0": no available registry endpoint: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Q: Why do I get this error? I'm just following the tutorial without skipping anything.
I'm already logged in with Docker.
You need to push this locally built image to the Docker Hub registry. For that, you need to create a Docker Hub account if you do not have one already.
Once you do that, you need to login to Docker Hub from your command line.
docker login
Tag your image so it goes to your Docker Hub repository.
docker tag bulletinboard:1.0 <your docker hub user>/bulletinboard:1.0
Push your image to Docker Hub
docker push <your docker hub user>/bulletinboard:1.0
Update the yaml file to reflect the new image repo on Docker Hub.
spec:
  containers:
  - name: bb-site
    image: <your docker hub user>/bulletinboard:1.0
Re-apply the YAML file:
microk8s.kubectl apply -f bb.yaml
You can host a local registry server if you do not wish to use Docker Hub.
Start a local registry server:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Tag your image:
sudo docker tag bulletinboard:1.0 localhost:5000/bulletinboard
Push it to a local registry:
sudo docker push localhost:5000/bulletinboard
Change the yaml file:
spec:
  containers:
  - name: bb-site
    image: localhost:5000/bulletinboard
Start the deployment:
kubectl apply -f bb.yaml
A suggested solution is to add imagePullPolicy: Never to your Deployment, as per the answer here, but this didn't work for me, so I followed this guide since I was working in a local development environment.
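Another MicroK8s-specific option worth noting (a sketch; the built-in registry add-on listens on localhost:32000 by default) is to push to MicroK8s's own local registry:
microk8s enable registry
docker tag bulletinboard:1.0 localhost:32000/bulletinboard:1.0
docker push localhost:32000/bulletinboard:1.0
# then in bb.yaml:
#   image: localhost:32000/bulletinboard:1.0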