How to deploy an Apache server in OpenShift?

I want to deploy an Apache server on OpenShift. The server runs fine locally, but when I deploy it on OpenShift I run into the following issue:
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
The likely reason is that Apache runs as the root user, and OpenShift doesn't allow that.
Can someone help me with this?

Port 80 is a privileged port, and the default OpenShift Security Context Constraints do not allow containers to bind to it.
You should use a container image that listens on an unprivileged port such as 8080 or 8443.

Try the following configuration; refer to Enable Container Images that Require Root
for more details.
If your httpd pod runs as the default service account, you can grant it the anyuid SCC so it can run as the root user. You need to restart the pod for the change to take effect.
# oc get pod <your pod name> -o yaml | grep -i serviceAccountName
serviceAccountName: default
# oc adm policy add-scc-to-user anyuid -z default
# oc delete pod <your pod name>
UPDATE: The container's port 80 will not conflict with port 80 on the host unless the pod runs with the hostnetwork SCC, because the container is isolated from the host network by the kernel's namespaces feature.
My testing evidence is as follows.
--- haproxy is already running with 80 port on the host.
# ss -ntlpo | grep -w :80
LISTEN 0 128 *:80 *:* users:(("haproxy",pid=22603,fd=6))
--- Create a project for testing
# oc new-project httpd-test
--- Create a httpd pod
# oc new-app --name httpd24 --docker-image=docker.io/httpd
--- Check the state of the pod
# oc get pod
NAME READY STATUS RESTARTS AGE
httpd24-1-hhp6g 0/1 CrashLoopBackOff 8 19m
# oc logs httpd24-1-hhp6g
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.1.201. Set the 'ServerName' directive globally to suppress this message
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
--- Configure "anyuid" for running the httpd pod with 80 port
# oc get pod httpd24-1-hhp6g -o yaml | grep -wi serviceaccountname
serviceAccountName: default
# oc adm policy add-scc-to-user anyuid -z default
scc "anyuid" added to: ["system:serviceaccount:httpd-test:default"]
# oc delete pod httpd24-1-hhp6g
pod "httpd24-1-hhp6g" deleted
--- Check the state of httpd pod again
# oc get pod
NAME READY STATUS RESTARTS AGE
httpd24-1-9djkv 1/1 Running 0 1m
# oc logs httpd24-1-9djkv
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.1.202. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.128.1.202. Set the 'ServerName' directive globally to suppress this message
[Mon May 06 12:10:47.487909 2019] [mpm_event:notice] [pid 1:tid 139699524075584] AH00489: Apache/2.4.39 (Unix) configured -- resuming normal operations
[Mon May 06 12:10:47.488232 2019] [core:notice] [pid 1:tid 139699524075584] AH00094: Command line: 'httpd -D FOREGROUND'
I hope this helps.

I encourage you to use the existing Apache images that are based on RHEL 7:
registry.redhat.io/rhscl/httpd-24-rhel7
These images support S2I, expose port 8080, and can run with any UID (not just root). You can use the following template: HTTPD template
EDIT: I have updated the link to the right template.
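As a sketch, pointing a plain Deployment at that image might look like the following. The manifest is illustrative only: the name, labels, and replica count are assumptions, not taken from the template.

```shell
# Hedged sketch: minimal Deployment manifest for the rhscl httpd image,
# which serves on the unprivileged port 8080 (names/labels are made up).
cat > httpd-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-24
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd-24
  template:
    metadata:
      labels:
        app: httpd-24
    spec:
      containers:
      - name: httpd
        image: registry.redhat.io/rhscl/httpd-24-rhel7
        ports:
        - containerPort: 8080
EOF
# sanity-check the port we declared
grep -c 'containerPort: 8080' httpd-deploy.yaml
```

On a live cluster this would be applied with oc apply -f httpd-deploy.yaml.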


Error with documentation? NetworkPolicy

I walked through the code on a 3-node Kubernetes cluster, but I don't seem to be able to block the flow of traffic to a Deployment's pods using a NetworkPolicy.
Here is the output from the exercise.
user#myk8master:~$ kubectl get deployment,svc,networkpolicy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP X.X.X.X <none> 443/TCP 20d
user#myk8master:~$
user#myk8master:~$
user#myk8master:~$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
user#myk8master:~$ kubectl expose deployment nginx --port=80
service/nginx exposed
user#myk8master:~$ kubectl run busybox --rm -ti --image=busybox -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (X.X.X.X:80)
remote file exists
/ # exit
Session ended, resume using 'kubectl attach busybox -c busybox -i -t' command when the pod is running
pod "busybox" deleted
user#myk8master:~$
user#myk8master:~$
user#myk8master:~$ vi network-policy.yaml
user#myk8master:~$ cat network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
user#myk8master:~$
user#myk8master:~$
user#myk8master:~$ kubectl apply -f network-policy.yaml
networkpolicy.networking.k8s.io/access-nginx created
user#myk8master:~$
user#myk8master:~$
user#myk8master:~$ kubectl run busybox --rm -ti --image=busybox -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.100.97.229:80)
remote file exists. <<<< THIS SHOULD NOT WORK
I followed all the steps as-is, but it seems I am unable to block the traffic even with the NetworkPolicy defined.
Can someone please help and let me know if I am doing something dumb here?
As described in the documentation, restricting client access only works when the cluster runs a network plugin that enforces NetworkPolicy. A missing or misconfigured plugin will leave the policy silently unenforced, so try reinstalling or reconfiguring it.
You can also try another method and block clients in NGINX itself.
You can restrict access by IP address: NGINX can allow or deny access based on a particular IP address or a range of IP addresses of client computers. To allow or deny access, use the allow and deny directives inside the stream context or a server block:
stream {
    #...
    server {
        listen 12345;
        deny   192.168.1.2;
        allow  192.168.1.1/24;
        allow  2001:0db8::/32;
        deny   all;
    }
}
Limiting the number of TCP connections: you can limit the number of simultaneous TCP connections from one IP address:
stream {
    #...
    limit_conn_zone $binary_remote_addr zone=ip_addr:10m;
    #...
}
You can also limit bandwidth, restrict IP ranges, and so on; using NGINX for this is more flexible.
Refer to the link for more information about network plugins.
My bad. I forgot to set up one of the supported network plugins, as indicated in the documentation. It worked flawlessly after that.
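For reference, once a policy-enforcing network plugin is in place, only pods carrying the access: "true" label from the policy's ingress podSelector are admitted. A minimal allowed-client manifest might look like this (the pod name and command are illustrative, not from the question):

```shell
# Hedged sketch: client pod labeled to match the NetworkPolicy above.
cat > client-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-client
  labels:
    access: "true"      # matches the policy's ingress podSelector
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
EOF
# sanity-check the label the policy selects on
grep -c 'access: "true"' client-pod.yaml
```

A pod without that label should then time out on wget --spider nginx, while this one succeeds.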

Unable to access a service from the Kubernetes master node

[root#kubemaster ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1deployment-c8b9c74cb-hkxmq 1/1 Running 0 12s 192.168.90.1 kubeworker1 <none> <none>
[root#kubemaster ~]# kubectl logs pod1deployment-c8b9c74cb-hkxmq
2020/05/16 23:29:56 Server listening on port 8080
[root#kubemaster ~]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13m <none>
pod1service ClusterIP 10.101.174.159 <none> 80/TCP 16s creator=sai
Curl on master node:
[root#kubemaster ~]# curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
Curl on worker node 1 succeeds for the cluster IP (this is the node where the pod is running):
[root#kubemaster ~]# ssh kubeworker1 curl -m 2 -v -s http://10.101.174.159:80
Hello, world!
Version: 1.0.0
Hostname: pod1deployment-c8b9c74cb-hkxmq
Curl fails on the other worker node as well:
[root#kubemaster ~]# ssh kubeworker2 curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
I was facing the same issue, so this is what I did, and it worked:
Brief: I am running 2 VMs as a 2-node cluster: 1 master node and 1 worker node. A Deployment runs on the worker node. I wanted to curl from the master node so that I could get a response from my application running inside a pod on the worker node. For that I deployed a Service, which exposed that set of pods inside the cluster.
Issue: After deploying the service, kubectl get service gave me the ClusterIP of that service and a port (by the way, I used type NodePort instead of ClusterIP when writing the service.yaml). But curling that IP address and port just hung and eventually timed out.
Solution: I then looked at the hierarchy: first contact the node the service's pod is running on, then use the NodePort (the one in the 30000-32767 range). So I ran kubectl get nodes -o wide to get the internal IP address of the required node (mine was 10.0.1.4), then kubectl get service -o wide to get the NodePort, and curled that. My curl command was curl http://10.0.1.4:30669, and I was able to get the output.
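The port-extraction step described above can be sketched with awk. The sample line mirrors the example values from this answer; on a live cluster you would pipe the real kubectl get service output through the same filter:

```shell
# Hedged sketch: pull the NodePort (the 30000-32767 number) out of a
# "kubectl get service -o wide" line; the sample values are illustrative.
svc_line='pod1service   NodePort   10.101.174.159   <none>   80:30669/TCP   16s'
# field 5 is "port:nodePort/proto"; split on ":" and "/" to isolate the NodePort
node_port=$(echo "$svc_line" | awk '{split($5, a, "[:/]"); print a[2]}')
echo "$node_port"
# then: curl http://<node-internal-ip>:"$node_port"
```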
First of all, you should always use the Service DNS name instead of cluster/dynamic IPs to access a deployed application. The Service DNS name is <service-name>.<service-namespace>.svc.cluster.local; cluster.local is the default Kubernetes cluster domain, unless it has been changed.
Now, coming to the service accessibility: it may be a DNS issue. Check the kube-dns pod logs in the kube-system namespace, and try reaching the service from a standalone pod (busybox ships sh and wget rather than bash and curl):
kubectl run --generator=run-pod/v1 bastion --image=busybox
kubectl exec -it bastion -- sh
wget -S pod1service.default.svc.cluster.local
If that doesn't work, the next questions would be: where is the cluster running, and how was it created?

Expose nginx ingress controller as a DaemonSet

I am trying to install and use nginx-ingress to expose services running in a Kubernetes cluster, and I am following these instructions.
Step 4 notes:
If you created a daemonset, ports 80 and 443 of the Ingress controller container are mapped to the same ports of the node where the container is running. To access the Ingress controller, use those ports and an IP address of any node of the cluster where the Ingress controller is running.
That means the DaemonSet should be listening on ports 80 and 443 and forwarding incoming traffic to the service mapped by an ingress.yaml config file.
But after running step 3.2, kubectl apply -f daemon-set/nginx-ingress.yaml, the DaemonSet was created, yet nothing was listening on 80 or 443 on any of the cluster's nodes.
Is there a problem with the install instructions, or am I missing something?
It is not the typical LISTEN state you would see in the output of netstat; the ports are "listened" on by iptables. The following are the iptables rules for the ingress controller on my cluster node.
-A CNI-DN-0320b4db24e84e16999fd -s 10.233.88.110/32 -p tcp -m tcp --dport 80 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-0320b4db24e84e16999fd -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.233.88.110:80
-A CNI-DN-0320b4db24e84e16999fd -s 10.233.88.110/32 -p tcp -m tcp --dport 443 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-0320b4db24e84e16999fd -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.233.88.110:443
10.233.88.110 is the IP address of the ingress controller pod running on that node.
$ kubectl get pod -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-controller-5rh26 1/1 Running 1 77d 10.233.83.110 k8s-master3 <none> <none>
ingress-nginx-controller-9nnwl 1/1 Running 1 77d 10.233.88.110 k8s-master2 <none> <none>
ingress-nginx-controller-ckkb2 1/1 Running 1 77d 10.233.68.111 k8s-master1 <none> <none>
Edit
When a request arrives on port 80/443, iptables applies a DNAT rule that rewrites the destination IP to the ingress controller's address. The actual listening happens inside the ingress controller container.
As mentioned by Hang du (+1), this depends on the default --proxy-mode setting in your cluster:
Which proxy mode to use: 'userspace' (older) or 'iptables' (faster) or 'ipvs'. If blank, use the best-available proxy (currently iptables). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.
Assuming you have an nginx-ingress-controller-xxx pod in the kube-system namespace, you can verify these rules on your side with this command:
sudo iptables-save | grep $(kubectl get pods -n kube-system -o wide | grep nginx-ingress-controller-xxx | awk '{print $6}')
More information about iptables/netfilter can be found here and here.
Additional resources:
Network Plugins
Introducing kube-iptables-tailer
Update:
HostNetwork - Controls whether the pod may use the node network namespace. Doing so gives the pod access to the loopback device, services listening on localhost, and could be used to snoop on network activity of other pods on the same node.
So, in addition to the above answer:
In order to bind ports 80 and 443 directly to the Kubernetes nodes' network interfaces, you can set hostNetwork: true (although it's not recommended):
Enabling this option exposes every system daemon to the NGINX Ingress controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully.
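A sketch of what that looks like in the DaemonSet manifest; the surrounding fields are abbreviated and the container name is illustrative, but hostNetwork is the standard pod-spec field:

```shell
# Hedged sketch: pod-spec fragment enabling host networking for the
# ingress controller DaemonSet (abbreviated, illustrative manifest).
cat > hostnet-fragment.yaml <<'EOF'
spec:
  template:
    spec:
      hostNetwork: true    # pod shares the node's network namespace
      containers:
      - name: nginx-ingress
        ports:
        - containerPort: 80
        - containerPort: 443
EOF
# sanity-check the flag we set
grep -c 'hostNetwork: true' hostnet-fragment.yaml
```

With this set, the controller binds 80/443 on the node directly, so netstat on the node will show a real LISTEN instead of only iptables DNAT rules.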

Frequently crashing pod in OpenShift

I am getting this in the log while deploying an image in OpenShift:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.13. Set the 'ServerName' directive globally to suppress this message
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
Dockerfile:
FROM httpd:2.4
RUN echo "hello app" > /usr/local/apache2/htdocs/hello.html
I also get the error if I use EXPOSE 80.
Ports below 1024 are so-called privileged ports; to bind to them, a process needs root capabilities.
In your case, you are trying to have your service listen on port 80, which is in that privileged range.
By default, OpenShift does not run the containers inside pods as root.
You will either have to adjust the user the container runs as, or have it listen on a different port.
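A hedged sketch of the second option, adapting the question's Dockerfile so httpd listens on an unprivileged port. The sed pattern is an assumption about the stock httpd.conf layout in the httpd:2.4 image:

```shell
# Hedged sketch: switch httpd from port 80 to 8080 in the image build.
cat > Dockerfile <<'EOF'
FROM httpd:2.4
RUN sed -i 's/^Listen 80$/Listen 8080/' /usr/local/apache2/conf/httpd.conf
RUN echo "hello app" > /usr/local/apache2/htdocs/hello.html
EXPOSE 8080
EOF
# sanity-check the rewritten Listen directive is in the build
grep -c 'Listen 8080' Dockerfile
```

Since 8080 is above 1024, the container no longer needs root capabilities to bind, and the make_sock permission error should go away.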

TaskWarrior Port not Opening Externally

I run a Debian 9 server (recently upgraded from Debian 8, where similar problems occurred). I have a Taskwarrior server (taskd) instance up and running, and it works internally, but I am unable to sync to it externally. I run a UFW firewall instance.
/var/taskd/config:
confirmation=1
extensions=/usr/local/libexec/taskd
ip.log=on
log=/var/taskd/taskd.log
pid.file=/var/taskd/taskd.pid
queue.size=10
request.limit=1048576
root=/var/taskd
server=hub.home:53589
trust=strict
verbose=1
client.cert=/var/taskd/client.cert.pem
client.key=/var/taskd/client.key.pem
server.cert=/var/taskd/server.cert.pem
server.key=/var/taskd/server.key.pem
server.crl=/var/taskd/server.crl.pem
ca.cert=/var/taskd/ca.cert.pem
/etc/systemd/system/taskd.service
[Unit]
Description=Secure server providing multi-user, multi-client access to Taskwarrior data
Requires=network.target
After=network.target
Documentation=http://taskwarrior.org/docs/#taskd
[Service]
ExecStart=/usr/local/bin/taskd server --data /var/taskd
Type=simple
User=<myusername>
Group=<mygroupname>
WorkingDirectory=/var/taskd
PrivateTmp=true
InaccessibleDirectories=/home /root /boot /opt /mnt /media
ReadOnlyDirectories=/etc /usr
[Install]
WantedBy=multi-user.target
systemctl status taskd.service:
● taskd.service - Secure server providing multi-user, multi-client access to Taskwarrior data
Loaded: loaded (/etc/systemd/system/taskd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2017-07-04 10:21:42 BST; 28min ago
Docs: http://taskwarrior.org/docs/#taskd
Main PID: 3964 (taskd)
Tasks: 1 (limit: 4915)
CGroup: /system.slice/taskd.service
└─3964 /usr/local/bin/taskd server --data /var/taskd
sudo ufw status:
Status: active
To Action From
-- ------ ----
...
53589 ALLOW Anywhere
53589 (v6) ALLOW Anywhere (v6)
...
nmap localhost -p 53589 -Pn (from host)
...
PORT STATE SERVICE
53589/tcp closed unknown
...
nmap hub.home -p 53589 -Pn (from host)
...
PORT STATE SERVICE
53589/tcp open unknown
...
nmap hub.home -p 53589 -Pn (from client)
...
PORT STATE SERVICE
53589/tcp closed unknown
...
taskd server --debug --debug.tls=2
s: INFO Client certificate will be verified.
s: INFO IPv4: 127.0.1.1
s: INFO Server listening.
The sync works internally but not externally.
Many thanks
I ran into the same issue. For me, the fix was ensuring /etc/hosts listed the externally facing IP address, setting the server variable in the taskd config to the FQDN with port, and then setting family=IPv4 (it wouldn't work with IPv6 for me). The only thing I don't see in your config is the family setting...
Also, in your debug output the INFO IPv4: 127.0.1.1 doesn't match the comment you made about taskd.server=192.*; that looks like a localhost loopback address.
Maybe if you edit /etc/hosts with the fully qualified domain name and hostname, and specify the IP address and IP family in the config, it will give Taskwarrior the information it needs to bind to the right external IP and port and permit the use of the self-signed certificate.
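A sketch of the config additions being suggested; hub.home is the hostname from the question, and this writes to a scratch file rather than /var/taskd/config:

```shell
# Hedged sketch: append the suggested settings to the taskd config file
# (taskd reads key=value settings; file path here is a scratch copy).
cat >> taskd-config <<'EOF'
family=IPv4
server=hub.home:53589
EOF
# sanity-check the family setting was added
grep '^family=' taskd-config
```

After editing the real config, restart the service (systemctl restart taskd.service) and re-check the debug output for the external address instead of 127.0.1.1.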
When I run with the debug server, I get:
taskd#(host):~$ taskd server --debug --debug.tls=2
s: INFO Client certificate will be verified.
s: INFO IPv4: (my external IPv4 address)
s: INFO Server listening.
