SCADA LTS - HTTP Status 404 - docker

After starting a SCADA LTS Docker container as suggested on https://github.com/SCADA-LTS/Scada-LTS with the following command:
docker run -it -e DOCKER_HOST_IP=$(docker-machine ip) -p 81:8080 scadalts/scadalts /root/start.sh
The container works well for some time, and then suddenly an "HTTP Status 404" error is shown, like the following:
http://[IP]/ScadaBR/
HTTP Status 404 - /ScadaBR/
type Status report
message /ScadaBR/
description The requested resource is not available.
Apache Tomcat/7.0.85
Where [IP] is the default Docker IP address and port; most of the time it is localhost:81.
Any idea how to solve it?
Thank you in advance!

TL;DR
After some time running, the MySQL service dies. It is necessary to restart it manually with:
docker exec scada service mysql restart
docker exec scada killall tail
DETAILED REPORT
When the error is shown, you can check if all the services are running on the container (in this case named 'scada'):
>docker exec scada ps -A
PID TTY TIME CMD
1 ? 00:00:00 start.sh
790 ? 01:00:22 java
791 ? 00:01:27 tail
858 ? 00:00:00 ps
As can be seen, no MySQL service is running. This explains why Tomcat is running but SCADA-LTS isn't.
You can restart MySQL service inside the container with:
docker exec scada service mysql restart
After that, SCADA-LTS is still down and you have to restart Tomcat. The start script appears to block on the tail process, so killing it lets the script continue and relaunch Tomcat:
docker exec scada killall tail
After a minute or less, all the services are running:
>docker exec scada ps -A
PID TTY TIME CMD
1 ? 00:00:00 start.sh
43 ? 00:00:00 mysqld_safe
398 ? 00:00:00 mysqld
481 ? 00:00:31 java
482 ? 00:00:00 sleep
618 ? 00:00:00 ps
Now SCADA-LTS is running!
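If this happens repeatedly, the check can be automated from the host. Below is a minimal watchdog sketch under the same assumptions as above (the container is named 'scada', MySQL is the service that dies, and pgrep is available inside the image):
#!/bin/bash
# Hypothetical watchdog: restart MySQL (and Tomcat) inside the 'scada'
# container whenever mysqld is no longer running.
while true; do
  if ! docker exec scada pgrep mysqld > /dev/null; then
    echo "$(date): mysqld is down, restarting services"
    docker exec scada service mysql restart
    docker exec scada killall tail   # makes start.sh relaunch Tomcat
  fi
  sleep 60
done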

Related

how to change configuration of freeradius-server in docker container?

I'm trying to build a FreeRADIUS server using Docker, and pulled the image "freeradius/freeradius-server". The first time, I used the given command
docker run --name my-radius -t -d freeradius/freeradius-server -X
to create the container, and it successfully started in debug mode. But I didn't know how to quit, so I used Ctrl+C to stop the container. Then I used the commands below to get into the container, wanting to start debug mode again so that I could change the configuration or parameters.
docker start my-radius
docker exec -it my-radius /bin/bash
I got into the container and ran freeradius -X, but it failed. It presented:
Failed binding to auth address 127.0.0.1 port 18120 bound to server inner-tunnel: Address already in use
/etc/freeradius/sites-enabled/inner-tunnel[33]: Error binding to port for 127.0.0.1 port 18120
I used Google to look for solutions but failed. I guess it means the RADIUS server started automatically, so the address 127.0.0.1 and port 18120 were already in use. But I don't know how to stop it in the container.
The official FreeRADIUS docker image will start FreeRADIUS when the container starts. This means that if you start the container and then exec a shell into it, FreeRADIUS will already be running.
The container will exit as soon as the FreeRADIUS process stops, meaning it is not possible to start the container in this way, stop FreeRADIUS running, and then continue to use the container.
In this situation, trying to run FreeRADIUS a second time in another shell will fail because the ports are already open, as you have discovered.
This can be seen thus:
$ docker run --name my-radius -d freeradius/freeradius-server
106cdbc81e8e5c0257f22bebad221ed1b4ba0a14f40ce1e4110ec388380c7e62
$ docker exec -it my-radius /bin/bash
root@106cdbc81e8e:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
freerad 1 0 1 23:10 ? 00:00:00 freeradius -f
root 12 0 1 23:10 pts/0 00:00:00 /bin/bash
root 22 12 0 23:10 pts/0 00:00:00 ps -ef
root@106cdbc81e8e:/# exit
exit
$ docker stop my-radius
my-radius
$ docker rm my-radius
my-radius
$
To be able to run FreeRADIUS yourself you can do two things. Firstly, don't start the container in the background, but start it in the foreground with FreeRADIUS in debug mode. The docker entrypoint will let you pass arguments directly to the daemon. This is the easiest way if you don't need to actually do anything inside the container, but just run FreeRADIUS in debug mode:
$ docker run --name my-radius -it freeradius/freeradius-server -X
FreeRADIUS Version 3.0.21
Copyright (C) 1999-2019 The FreeRADIUS server project and contributors
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE
...
Listening on auth address * port 1812 bound to server default
Listening on acct address * port 1813 bound to server default
Listening on auth address :: port 1812 bound to server default
Listening on acct address :: port 1813 bound to server default
Listening on auth address 127.0.0.1 port 18120 bound to server inner-tunnel
Listening on proxy address * port 38640
Listening on proxy address :: port 49445
Ready to process requests
^C$
(Note: hit Ctrl-C to quit.)
The alternative is to start it in the background, but instead of running FreeRADIUS run some other process. You can then exec into the container and run FreeRADIUS manually. This means you get a full shell inside the container without FreeRADIUS already running. For instance:
$ docker run --name my-radius -d freeradius/freeradius-server sleep 999999999999
23b5ddd4825a31a8fb417e1594028c6533267be4ff20a448d3844203b805dbd9
$ docker exec -it my-radius /bin/bash
root@23b5ddd4825a:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 23:16 ? 00:00:00 sleep 999999999999
root 7 0 0 23:17 pts/0 00:00:00 /bin/bash
root 17 7 0 23:17 pts/0 00:00:00 ps -ef
root@23b5ddd4825a:/# freeradius -X
FreeRADIUS Version 3.0.21
Copyright (C) 1999-2019 The FreeRADIUS server project and contributors
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE
...
Listening on proxy address * port 46662
Listening on proxy address :: port 40284
Ready to process requests
^Croot@23b5ddd4825a:/# exit
exit
$ docker container kill my-radius
my-radius
$ docker container rm my-radius
my-radius
The sleep command used here will obviously quit at some point, so use a number large enough that it runs for as long as you need; when that process exits, the container will shut down.
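As a side note, if the image's base system ships GNU coreutils (which I believe the official image does), sleep infinity avoids having to pick a magic number, and the daemon can then be run directly via exec without a shell:
$ docker run --name my-radius -d freeradius/freeradius-server sleep infinity
$ docker exec -it my-radius freeradius -X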

Where exactly do the logs of kubernetes pods come from (at the container level)?

I'm looking to redirect some logs from a command run with kubectl exec to that pod's logs, so that they can be read with kubectl logs <pod-name> (or really, /var/log/containers/<pod-name>.log). I can see the logs I need as output when running the command, and they're stored inside a separate log directory inside the running container.
Redirecting the output (i.e. >> logfile.log) to the file which I thought was mirroring what is in kubectl logs <pod-name> does not update that container's logs, and neither does redirecting to stdout.
When calling kubectl logs <pod-name>, my understanding is that kubelet gets them from its internal /var/log/containers/ directory. But what determines which logs are stored there? Is it the same process as the way logs get stored inside any other Docker container?
Is there a way to examine/trace the logging process, or determine where these logs are coming from?
Logs from the STDOUT and STDERR of containers in the pod are captured and stored inside files in /var/log/containers. This is what is presented when kubectl logs is run.
In order to understand why output from commands run by kubectl exec is not shown when running kubectl log, let's have a look how it all works with an example:
First, launch a pod running ubuntu that sleeps forever:
$> kubectl run test --image=ubuntu --restart=Never -- sleep infinity
Exec into it:
$> kubectl exec -it test bash
Seen from inside the container, it is the STDOUT and STDERR of PID 1 that are captured. When you kubectl exec into the container, a new process is created living alongside PID 1:
root@test:/# ps -auxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 7 0.0 0.0 18504 3400 pts/0 Ss 20:04 0:00 bash
root 19 0.0 0.0 34396 2908 pts/0 R+ 20:07 0:00 \_ ps -auxf
root 1 0.0 0.0 4528 836 ? Ss 20:03 0:00 sleep infinity
Redirecting to STDOUT does not work because /dev/stdout is a symlink to the standard output of the process accessing it (/proc/self/fd/1 rather than /proc/1/fd/1):
root@test:/# ls -lrt /dev/stdout
lrwxrwxrwx 1 root root 15 Nov 5 20:03 /dev/stdout -> /proc/self/fd/1
In order to see the logs from commands run with kubectl exec, the output needs to be redirected to the streams that are captured by the kubelet (STDOUT and STDERR of PID 1). This can be done by redirecting output to /proc/1/fd/1:
root@test:/# echo "Hello" > /proc/1/fd/1
Exiting the interactive shell and checking the logs with kubectl logs should now show the output:
$> kubectl logs test
Hello
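The same trick works non-interactively; a small sketch, where some-command is a placeholder for whatever you need to run, and stderr goes to PID 1's stderr via /proc/1/fd/2:
$> kubectl exec test -- sh -c 'some-command > /proc/1/fd/1 2> /proc/1/fd/2'
$> kubectl logs test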

Cannot connect to Docker container running in VSTS

I have a test which starts a Docker container, performs the verification (which is talking to the Apache httpd in the Docker container), and then stops the Docker container.
When I run this test locally, this test runs just fine. But when it runs on hosted VSTS, thus a hosted build agent, it cannot connect to the Apache httpd in the Docker container.
This is the .vsts-ci.yml file:
queue: Hosted Linux Preview
steps:
- script: |
./test.sh
This is the test.sh shell script to reproduce the problem:
#!/bin/bash
set -e
set -o pipefail
function tearDown {
docker stop test-apache
docker rm test-apache
}
trap tearDown EXIT
docker run -d --name test-apache -p 8083:80 httpd
sleep 10
curl -D - http://localhost:8083/
When I run this test locally, the output that I get is:
$ ./test.sh
469d50447ebc01775d94e8bed65b8310f4d9c7689ad41b2da8111fd57f27cb38
HTTP/1.1 200 OK
Date: Tue, 04 Sep 2018 12:00:17 GMT
Server: Apache/2.4.34 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html
<html><body><h1>It works!</h1></body></html>
test-apache
test-apache
This output is exactly as I expect.
But when I run this test on VSTS, the output that I get is (irrelevant parts replaced with …).
2018-09-04T12:01:23.7909911Z ##[section]Starting: CmdLine
2018-09-04T12:01:23.8044456Z ==============================================================================
2018-09-04T12:01:23.8061703Z Task : Command Line
2018-09-04T12:01:23.8077837Z Description : Run a command line script using cmd.exe on Windows and bash on macOS and Linux.
2018-09-04T12:01:23.8095370Z Version : 2.136.0
2018-09-04T12:01:23.8111699Z Author : Microsoft Corporation
2018-09-04T12:01:23.8128664Z Help : [More Information](https://go.microsoft.com/fwlink/?LinkID=613735)
2018-09-04T12:01:23.8146694Z ==============================================================================
2018-09-04T12:01:26.3345330Z Generating script.
2018-09-04T12:01:26.3392080Z Script contents:
2018-09-04T12:01:26.3409635Z ./test.sh
2018-09-04T12:01:26.3574923Z [command]/bin/bash --noprofile --norc /home/vsts/work/_temp/02476800-8a7e-4e22-8715-c3f706e3679f.sh
2018-09-04T12:01:27.7054918Z Unable to find image 'httpd:latest' locally
2018-09-04T12:01:30.5555851Z latest: Pulling from library/httpd
2018-09-04T12:01:31.4312351Z d660b1f15b9b: Pulling fs layer
[…]
2018-09-04T12:01:49.1468474Z e86a7f31d4e7506d34e3b854c2a55646eaa4dcc731edc711af2cc934c44da2f9
2018-09-04T12:02:00.2563446Z % Total % Received % Xferd Average Speed Time Time Time Current
2018-09-04T12:02:00.2583211Z Dload Upload Total Spent Left Speed
2018-09-04T12:02:00.2595905Z
2018-09-04T12:02:00.2613320Z 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8083: Connection refused
2018-09-04T12:02:00.7027822Z test-apache
2018-09-04T12:02:00.7642313Z test-apache
2018-09-04T12:02:00.7826541Z ##[error]Bash exited with code '7'.
2018-09-04T12:02:00.7989841Z ##[section]Finishing: CmdLine
The key thing is this:
curl: (7) Failed to connect to localhost port 8083: Connection refused
10 seconds should be enough for Apache to start.
Why can curl not communicate with Apache on its port 8083?
P.S.:
I know that a hard-coded port like this is rubbish and that I should use an ephemeral port instead. I wanted to get it running first with a hard-coded port, because that's simpler than using an ephemeral port, and then switch to an ephemeral port as soon as the hard-coded port works. In case the hard-coded port doesn't work because the port is unavailable, the error should look different: docker run should fail because the port can't be allocated.
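(For reference, the ephemeral-port variant could look like the sketch below; Docker picks a free host port when none is given, and docker port queries the assignment. This is only about the port handling, not about the connection problem itself.)
docker run -d --name test-apache -p 80 httpd
port=$(docker port test-apache 80 | awk -F: '{print $NF; exit}')
curl -D - http://localhost:$port/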
Update:
Just to be sure, I've rerun the test with sleep 100 instead of sleep 10. The results are unchanged, curl cannot connect to localhost port 8083.
Update 2:
When extending the script to execute docker logs, docker logs shows that Apache is running as expected.
When extending the script to execute docker ps, it shows the following output:
2018-09-05T00:02:24.1310783Z CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2018-09-05T00:02:24.1336263Z 3f59aa014216 httpd "httpd-foreground" About a minute ago Up About a minute 0.0.0.0:8083->80/tcp test-apache
2018-09-05T00:02:24.1357782Z 850bda64f847 microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard "/home/vsts/agents/2…" 2 minutes ago Up 2 minutes musing_booth
The problem is that the VSTS build agent itself runs in a Docker container. When the Docker container for Apache is started, it runs as a sibling of the build agent's container, not nested inside it.
There are two possible solutions:
Replacing localhost with the IP address of the Docker host, keeping the host port number 8083
Replacing localhost with the IP address of the Docker container, changing the host port number 8083 to the container port number 80
Access via the Docker Host
In this case, the solution is to replace localhost with the IP address of the Docker host. The following shell snippet does that:
host=localhost
# Detect whether this script itself runs inside a Docker container
if grep -q '^1:name=systemd:/docker/' /proc/1/cgroup
then
    # The Docker host is reachable as the container's default gateway;
    # install net-tools for the route command, then parse the gateway
    apt-get update
    apt-get install -y net-tools
    host=$(route -n | grep '^0.0.0.0' | sed -e 's/^0.0.0.0\s*//' -e 's/ .*//')
fi
curl -D - http://$host:8083/
The grep -q '^1:name=systemd:/docker/' /proc/1/cgroup check inspects whether the script is running inside a Docker container. If so, it installs net-tools to get access to the route command, and then parses the default gateway from the route output to get the IP address of the host. Note that this only works if the container's default gateway actually is the host.
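An alternative that avoids installing anything, assuming the iproute2 tools are present in the agent image (they usually are on Ubuntu), reads the same default gateway with ip route:
# Default gateway = Docker host, read via iproute2 instead of net-tools
host=$(ip route show default | awk '/default/ {print $3}')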
Direct Access to the Docker Container
After launching the docker container, its ip addresses can be obtained with the following command:
docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container-id>
Replace <container-id> with your container id or name.
So, in this case, it would be (assuming that the first IP address is okay, and using the container name test-apache from the script above):
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test-apache))
host=${ips[0]}
curl http://$host/
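Put together, a hedged sketch of test.sh using the container-IP approach (this assumes the build agent can reach the container network directly, which holds here because the agent itself runs in a container on the same host):
#!/bin/bash
set -e
set -o pipefail

function tearDown {
  docker stop test-apache
  docker rm test-apache
}
trap tearDown EXIT

docker run -d --name test-apache -p 8083:80 httpd
sleep 10

# Talk to the container directly on its own address and container port 80
ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' test-apache))
curl -D - http://${ips[0]}/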

docker - driver "devicemapper" failed to remove root filesystem after process in container killed

I am using Docker version 17.06.0-ce on Redhat with devicemapper storage. I am launching a container running a long-running service. The master process inside the container sometimes dies for whatever reason. I get the following error message.
/bin/bash: line 1: 40 Killed python -u scripts/server.py start go
I would like the container to exit and to be restarted by Docker. However, the container never exits. If I try to remove it manually, I get the following error:
Error response from daemon: driver "devicemapper" failed to remove root filesystem.
After googling, I tried a bunch of things:
docker rm -f <container>
rm -f <path to mount>
umount <path to mount>
All result in "device is busy". The only remedy right now is to reboot the host system, which is obviously not a long-term solution.
Any ideas?
I had the same problem and the solution was a real surprise.
So here is the error from docker rm:
$ docker rm 08d51aad0e74
Error response from daemon: driver "devicemapper" failed to remove root filesystem for 08d51aad0e74060f54bba36268386fe991eff74570e7ee29b7c4d74047d809aa: remove /var/lib/docker/devicemapper/mnt/670cdbd30a3627ae4801044d32a423284b540c5057002dd010186c69b6cc7eea: device or resource busy
Then I did the following (basically going through all processes and looking for docker in their mountinfo):
$ grep docker /proc/*/mountinfo | grep 958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac
/proc/20416/mountinfo:629 574 253:15 / /var/lib/docker/devicemapper/mnt/958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,relatime shared:288 - xfs /dev/mapper/docker-253:5-786536-958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota
This got me the PID of the offending process keeping it busy: 20416 (the item after /proc/).
So I did a ps -p and, to my surprise, found:
[devops#dp01app5030 SeGrid]$ ps -p 20416
PID TTY TIME CMD
20416 ? 00:00:19 ntpd
A true WTF moment. Googling the problem led me to this issue: https://github.com/docker/for-linux/issues/124
Turns out I had to restart the ntp daemon, and that fixed the issue!
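For anyone hitting this, the same diagnosis can be wrapped in a small sketch (the ID comes from the error message; the script only finds the PIDs holding the mount, and what to do with them is up to you):
#!/bin/bash
# Usage: ./find-busy.sh <devicemapper-id-from-the-error-message>
id=$1
# Print the PID of every process whose mountinfo references the device
grep "$id" /proc/*/mountinfo 2>/dev/null \
  | sed -e 's|^/proc/\([0-9]*\)/mountinfo.*|\1|' | sort -u
# Inspect each one with: ps -p <pid>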

Map process id of application on docker container with process id on host

I am running an application in a Docker container only, not on the host machine. The application has some process ID in the container, and it also has a (different) process ID on the host. How can I see, from the host, the process ID of the application running in the container? How can I map the process ID inside the container to the process ID on the host? I searched the internet, but could not find the correct set of commands.
Running a command like this should get you the PID of the container's main process (PID 1 inside the container) on the host:
docker container top
$ docker container top cf1b
UID PID PPID C STIME TTY TIME CMD
root 3289 3264 0 Aug24 pts/0 00:00:00 bash
root 9989 9963 99 Aug24 ? 6-07:24:43 java -javaagent:/apps/docker-custom/newrelic/newrelic.jar -Xmx4096m -Xms4096m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:-TieredCompilation -XX:+ParallelRefProcEnabled -jar /apps/service/app.jar
So in this case PID 1 in my container maps to ID 9989 on the host.
If a process is indeed ONLY in your container, that becomes more challenging. You can use tools like nsenter to peek into the namespaces, but if you have exec privileges on your container, that achieves the same thing; the docker container top command on the host, combined with the ps command within the container, can give you an idea of what is happening.
If you can clarify what your end goal is, we might be able to provide more clear guidance.
In order to get the mapping between a container process ID and a host process ID, you can run ps -ef in the container and docker top <container> on the host. The CMD column, present in both outputs, lets you match them up. Below is sample output from my environment:
container1:/$ ps -ef
UID PID PPID C STIME TTY TIME CMD
2033 10 0 0 11:08 pts/0 00:00:00 postgres -c config_file=/etc/postgresql/postgresql_primary.conf
host1# docker top warehouse_db
UID PID PPID C STIME TTY TIME CMD
bbharati 11677 11660 0 11:08 pts/0 00:00:00 postgres -c config_file=/etc/postgresql/postgresql_primary.conf
As we can see, the container process with PID=10 maps to the host process with PID=11677
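Two more direct routes, for the record: docker inspect exposes the host PID of the container's main process, and on reasonably recent kernels (4.1+) /proc/<pid>/status carries an NSpid line listing the PID in each nested namespace. A sketch, reusing the warehouse_db container and host PID 11677 from the example above:
# Host PID of the container's PID 1
docker inspect --format '{{.State.Pid}}' warehouse_db
# For a given host PID, NSpid lists "host PID ... namespace PID"
grep NSpid /proc/11677/status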
