uwsgi in docker in vagrant in vmware - socket not being created

On my Mac I am running Windows Server 2016 in VMware. Inside that I am running Ubuntu via Vagrant/VirtualBox, and inside that I am trying to run a Django app in a Docker container with nginx/uWSGI.
uwsgi is failing to start with:
[uWSGI] getting INI configuration from /opt/django/CAPgraph/uwsgi.ini
*** Starting uWSGI 2.0.15 (64bit) on [Thu Aug 17 20:01:23 2017] ***
compiled with version: 6.4.0 20170805 on 17 August 2017 06:10:50
os: Linux-3.13.0-128-generic #177-Ubuntu SMP Tue Aug 8 11:40:23 UTC 2017
nodename: 37db4344b5ae
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 1
current working directory: /
detected binary path: /usr/local/bin/uwsgi
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
chdir() to /opt/django/CAPgraph/
your memory page size is 4096 bytes
detected max file descriptor number: 524288
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
bind(): Operation not permitted [core/socket.c line 230]
In VMware the folder is shared with everyone with write permission. It is mounted in the Vagrant VM, where it is 777, and in the Docker container it is also 777. I can create files in the directory from all three places, but it seems uwsgi cannot create the socket.
I tried a short Python one-liner as a test from the Vagrant VM, and it could not create a socket either:
vagrant@vagrant-ubuntu-trusty-64:/vagrant$ python -c "import socket as s; sock = s.socket(s.AF_UNIX); sock.bind('/vagrant/app.sock')"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 1] Operation not permitted
Anyone know how I can resolve this?
UPDATE: I changed the dir to /tmp where I can create a socket with my python script, but still uwsgi fails with the same error.
UPDATE 2: I created the socket in /tmp with my python script, chmod-ed it to 777 and still I get the same error from uwsgi.

Use any folder other than /vagrant; I usually use /home/vagrant.
The problem is that /vagrant is not the same as a normal folder. If you execute the command mount | grep vagrant you will find it uses the vboxsf file system, which for some reason doesn't gel well with Docker.
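As a quick way to verify this (a sketch; /home/vagrant/run is just an example location on the VM's native filesystem, and the comments describe expected behavior):
vagrant@vagrant-ubuntu-trusty-64:~$ mount | grep vagrant
# a VirtualBox shared folder shows up here with type vboxsf
vagrant@vagrant-ubuntu-trusty-64:~$ mkdir -p /home/vagrant/run
vagrant@vagrant-ubuntu-trusty-64:~$ python -c "import socket as s; sock = s.socket(s.AF_UNIX); sock.bind('/home/vagrant/run/app.sock')"
# no traceback this time; point the socket option in uwsgi.ini at the same path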

Related

jmxterm: "Unable to create a system terminal" inside Docker container

I have a Docker image which contains a JRE, a Java web application, and jmxterm. The latter is used for running ad-hoc administrative tasks. The image is used on a CentOS 7 server with Docker 1.13 (which is pretty old, but is the latest version supplied via the distro's repository) to run the web application itself.
All works well, but after updating jmxterm from 1.0.0 to the latest version (1.0.2), I get the following warning when entering the running container and starting jmxterm:
WARNING: Unable to create a system terminal, creating a dumb terminal (enable debug logging for more information)
After this, jmxterm does not react to arrow keys (when trying to navigate through the command history), nor does it provide autocompletion.
Some quick investigation shows that the problem may be reproduced in a clean CentOS 7 environment. Say, this is how I could bootstrap the system and the container with all the stuff I need:
$ vagrant init centos/7
$ vagrant up
$ vagrant ssh
[vagrant@localhost ~]$ sudo yum install docker
[vagrant@localhost ~]$ sudo systemctl start docker
[vagrant@localhost ~]$ sudo docker run -it --entrypoint bash openjdk:11
root@0c4c614de0ee:/# wget https://github.com/jiaqi/jmxterm/releases/download/v1.0.2/jmxterm-1.0.2-uber.jar
And this is how I enter the container and run jmxterm:
[vagrant@localhost ~]$ sudo docker exec -it 0c4c614de0ee sh
root@0c4c614de0ee:/# java -jar jmxterm-1.0.2-uber.jar
WARNING: Unable to create a system terminal, creating a dumb terminal (enable debug logging for more information)
root@0c4c614de0ee:/# bea<TAB>
<Nothing happens, but autocompletion should have appeared>
A few observations:
the problem does not appear with the older jmxterm, no matter which image I use;
the problem arises with the new jmxterm, no matter which image I use;
the problem is not reproducible on my laptop (which has a newer kernel and Docker);
the problem is not reproducible if I use the latest Docker (from the external repo) on the CentOS 7 server instead of CentOS 7's native version 1.13.
What is happening, and why is the error reproducible only in specific environments? Is there any workaround for this?
TL;DR: running new jmxterm versions as java -jar jmxterm-1.0.2-uber.jar < /dev/tty is a quick, dirty, and hacky workaround for getting autocompletion and the other interactive features to work inside the container session.
A quick check shows that jmxterm tries to determine the terminal device used by the process — probably to obtain the terminal capabilities later — by running the tty utility:
root@0c4c614de0ee:/# strace -f -e 'trace=execve,wait4' java -jar jmxterm-1.0.2-uber.jar
execve("/opt/java/openjdk/bin/java", ["java", "-jar", "jmxterm-1.0.2-uber.jar"], 0x7ffed3a53210 /* 36 vars */) = 0
...
[pid 432] execve("/usr/bin/tty", ["tty"], 0x7fff8ea39608 /* 36 vars */) = 0
[pid 433] wait4(432, [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0, NULL) = 432
WARNING: Unable to create a system terminal, creating a dumb terminal (enable debug logging for more information)
The utility exits with status 1, which is likely the reason for the error message. Why?
root@0c4c614de0ee:/# strace -y tty
...
readlink("/proc/self/fd/0", "/dev/pts/3", 4095) = 10
stat("/dev/pts/3", 0x7ffe966f2160) = -1 ENOENT (No such file or directory)
...
write(1</dev/pts/3>, "not a tty\n", 10not a tty
) = 10
The utility says "not a tty" while we definitely have one. A quick check shows why: there really is no PTY device in the container, even though the shell's standard streams are connected to one!
root@0c4c614de0ee:/# ls -l /proc/self/fd
total 0
lrwx------. 1 root root 64 Jun 3 21:26 0 -> /dev/pts/3
lrwx------. 1 root root 64 Jun 3 21:26 1 -> /dev/pts/3
lrwx------. 1 root root 64 Jun 3 21:26 2 -> /dev/pts/3
lr-x------. 1 root root 64 Jun 3 21:26 3 -> /proc/61/fd
root@0c4c614de0ee:/# ls -l /dev/pts
total 0
crw-rw-rw-. 1 root root 5, 2 Jun 3 21:26 ptmx
What if we check the same with the latest Docker?
root@c0ebd608f79a:/# ls -l /proc/self/fd
total 0
lrwx------ 1 root root 64 Jun 3 21:45 0 -> /dev/pts/1
lrwx------ 1 root root 64 Jun 3 21:45 1 -> /dev/pts/1
lrwx------ 1 root root 64 Jun 3 21:45 2 -> /dev/pts/1
lr-x------ 1 root root 64 Jun 3 21:45 3 -> /proc/16/fd
root@c0ebd608f79a:/# ls -l /dev/pts
total 0
crw--w---- 1 root tty 136, 0 Jun 3 21:44 0
crw--w---- 1 root tty 136, 1 Jun 3 21:45 1
crw-rw-rw- 1 root root 5, 2 Jun 3 21:45 ptmx
Bingo! Now we have our PTYs where they should be, so jmxterm works well with the latest Docker.
It seems pretty weird that with older Docker the processes are connected to PTYs for which there are no devices in /dev/pts, but tracing the Docker process explains why this happens. Older Docker allocates the PTY for the container before setting other things up (including entering the new mount namespace and mounting devpts into it, or just entering the mount namespace in the case of docker exec -it):
[vagrant@localhost ~]$ sudo strace -p $(pidof docker-containerd-current) -f -e trace='execve,mount,unshare,openat,ioctl'
...
[pid 3885] openat(AT_FDCWD, "/dev/ptmx", O_RDWR|O_NOCTTY|O_CLOEXEC) = 9
[pid 3885] ioctl(9, TIOCGPTN, [1]) = 0
[pid 3885] ioctl(9, TIOCSPTLCK, [0]) = 0
...
[pid 3898] unshare(CLONE_NEWNS|CLONE_NEWUTS|CLONE_NEWIPC|CLONE_NEWNET|CLONE_NEWPID) = 0
...
[pid 3899] mount("devpts", "/var/lib/docker/overlay2/3af250a9f118d637bfba5701f5b0dfc09ed154c6f9d0240ae12523bf252e350c/merged/dev/pts", "devpts", MS_NOSUID|MS_NOEXEC, "newinstance,ptmxmode=0666,mode=0"...) = 0
...
[pid 3899] execve("/bin/bash", ["bash"], 0xc4201626c0 /* 7 vars */ <unfinished ...>
Note the newinstance mount option, which ensures that the devpts mount owns its PTYs exclusively and does not share them with other mounts. This leads to an interesting effect: the PTY device for the container stays on the host and belongs to the host's devpts mount, while the containerized process still has access to it, as it obtained the already-open file descriptors at the very beginning of its life!
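You can see the container's own devpts instance from inside it (mount is a standard command here; the output line is illustrative and may differ on your system):
root@0c4c614de0ee:/# mount | grep devpts
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,mode=600,ptmxmode=666)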
The latest Docker first mounts devpts for the container and then allocates the PTY, so the PTY belongs to container's devpts mount and is visible inside the container's filesystem:
$ sudo strace -p $(pidof containerd) -f -e trace='execve,mount,unshare,openat,ioctl'
...
[pid 14043] unshare(CLONE_NEWNS|CLONE_NEWUTS|CLONE_NEWIPC|CLONE_NEWPID|CLONE_NEWNET) = 0
...
[pid 14044] mount("devpts", "/var/lib/docker/overlay2/b743cf16ab954b9a4b4005bca0aeaa019c4836c7d58d6073044e5b48446c3d62/merged/dev/pts", "devpts", MS_NOSUID|MS_NOEXEC, "newinstance,ptmxmode=0666,mode=0"...) = 0
...
[pid 14044] openat(AT_FDCWD, "/dev/ptmx", O_RDWR|O_NOCTTY|O_CLOEXEC) = 7
[pid 14044] ioctl(7, TIOCGPTN, [0]) = 0
[pid 14044] ioctl(7, TIOCSPTLCK, [0]) = 0
...
[pid 14044] execve("/bin/bash", ["/bin/bash"], 0xc000203530 /* 4 vars */ <unfinished ...>
Well, the problem is caused by inappropriate Docker behavior, but how come the older jmxterm worked well in the same environment? Let's check (note that the Java 8 image is used here, as the older jmxterm does not play well with Java 11):
root@504a7757e310:/# wget https://github.com/jiaqi/jmxterm/releases/download/v1.0.0/jmxterm-1.0.0-uber.jar
root@504a7757e310:/# strace -f -e 'trace=execve,wait4' java -jar jmxterm-1.0.0-uber.jar
execve("/usr/local/openjdk-8/bin/java", ["java", "-jar", "jmxterm-1.0.0-uber.jar"], 0x7fffdcaebdd0 /* 10 vars */) = 0
...
[pid 310] execve("/bin/sh", ["sh", "-c", "stty -a < /dev/tty"], 0x7fff1f2a1cc8 /* 10 vars */) = 0
So, the older jmxterm just uses /dev/tty instead of asking tty for the device name, and this works, as that device is present inside the container:
root@504a7757e310:/# ls -l /dev/tty
crw-rw-rw-. 1 root root 5, 0 Jun 3 21:36 /dev/tty
The big difference between these versions of jmxterm is that the newer version uses a higher major version of jline, the library responsible for interaction with the terminal (akin to readline in the C world). The difference between major jline versions leads to the difference in jmxterm's behavior: current versions just rely on tty.
This observation leads us to a quick and dirty workaround which requires neither updating Docker nor patching the jline/jmxterm tandem: we may just attach jmxterm's stdin to /dev/tty forcibly, and thus make jline use this device (which is now referenced by /proc/self/fd/0) instead of the /dev/pts entry (which, formally, is not always correct, but is good enough for ad-hoc use):
root@0c4c614de0ee:/# java -jar jmxterm-1.0.2-uber.jar < /dev/tty
Welcome to JMX terminal. Type "help" for available commands.
$>bea<TAB>
bean beans
Now we have the autocompletion, history, and the other conveniences we need.
If you are trying to run an interactive application (one that needs a TTY) inside a Docker container or a Kubernetes pod, then the following should work.
For docker-compose use:
image: image-name:2.0
container_name: container-name
restart: always
stdin_open: true
tty: true
For kubernetes use:
spec:
  containers:
    - name: web
      image: web:latest
      tty: true
      stdin: true
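For a plain docker run, the equivalent of stdin_open/tty (or stdin/tty) is the standard -i and -t flags; image-name:2.0 is the placeholder image from the compose snippet above:
# -i keeps stdin open, -t allocates a PTY for the container
docker run -it image-name:2.0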

Pihole deployment restarting with helm

I'm trying to install pihole on a Kubernetes cluster on Docker via helm, following this guide. Everything seems to go smoothly, and I get a completed deployment:
NAME: pihole
LAST DEPLOYED: Wed Sep 30 22:22:15 2020
NAMESPACE: pihole
STATUS: deployed
REVISION: 1
TEST SUITE: None
But the pihole pod never reaches the ready state; it just restarts after a couple of minutes. Upon inspecting the pod I see:
lastState:
  terminated:
    containerID: docker://16e2a318b460d4d5aebd502175fb688fc150993940181827a506c086e2cb326a
    exitCode: 0
    finishedAt: "2020-09-30T22:01:55Z"
    reason: Completed
    startedAt: "2020-09-30T21:59:17Z"
How do I prevent this from continually restarting once it's complete?
Here is the output of kubectl logs <POD_NAME>:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 01-resolver-resolv: applying...
[fix-attrs.d] 01-resolver-resolv: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 20-start.sh: executing...
::: Starting docker specific checks & setup for docker pihole/pihole
[✓] Update local cache of available packages
[i] Existing PHP installation detected : PHP version 7.0.33-0+deb9u8
[i] Installing configs from /etc/.pihole...
[i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
[✓] Copying 01-pihole.conf to /etc/dnsmasq.d/01-pihole.conf
chown: cannot access '': No such file or directory
chmod: cannot access '': No such file or directory
chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
::: Pre existing WEBPASSWORD found
Using default DNS servers: 8.8.8.8 & 8.8.4.4
DNSMasq binding to default interface: eth0
Added ENV to php:
"PHP_ERROR_LOG" => "/var/log/lighttpd/error.log",
"ServerIP" => "0.0.0.0",
"VIRTUAL_HOST" => "pi.hole",
Using IPv4 and IPv6
::: Preexisting ad list /etc/pihole/adlists.list detected ((exiting setup_blocklists early))
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
https://mirror1.malwaredomains.com/files/justdomains
::: Testing pihole-FTL DNS: FTL started!
::: Testing lighttpd config: Syntax OK
::: All config checks passed, cleared for startup ...
::: Docker start setup complete
[✗] DNS resolution is currently unavailable
You are not alone with this issue.
Resolution is here - chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
This happens for me as well. I used that same tutorial to set up my cluster. If you are using a persistent volume as well, use an SSH connection to get to your drive and run these two commands from the /mnt/ssd directory described in the tutorial:
ls -l (this will show the owner and group of each file; they should all be www-data). If they are not, run:
sudo chown -R www-data:www-data pihole
This will allow you to add more whitelists/blacklists/adlists from the web portal.
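To confirm the fix, you can then watch the pod reach the ready state with standard kubectl commands (the pihole namespace matches the helm output above; <POD_NAME> is a placeholder):
kubectl -n pihole get pods -w
kubectl -n pihole describe pod <POD_NAME>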

Can I run k8s master INSIDE a docker container? Getting errors about k8s looking for host's kernel details

In a docker container I want to run k8s.
When I run kubeadm join ... or kubeadm init commands I sometimes see errors like
\"modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could
not open moddep file
'/lib/modules/3.10.0-1062.1.2.el7.x86_64/modules.dep.bin'.
nmodprobe:
FATAL: Module configs not found in directory
/lib/modules/3.10.0-1062.1.2.el7.x86_64",
err: exit status 1
because (I think) my container does not have the expected kernel header files.
I realise that the container reports its kernel based on the host that is running the container; and looking at k8s code I see
// getKernelConfigReader search kernel config file in a predefined list. Once the kernel config
// file is found it will read the configurations into a byte buffer and return. If the kernel
// config file is not found, it will try to load kernel config module and retry again.
func (k *KernelValidator) getKernelConfigReader() (io.Reader, error) {
    possibePaths := []string{
        "/proc/config.gz",
        "/boot/config-" + k.kernelRelease,
        "/usr/src/linux-" + k.kernelRelease + "/.config",
        "/usr/src/linux/.config",
    }
so I am a bit confused about the simplest way to run k8s inside a container such that it consistently gets past this kernel check.
I note that running docker run -it solita/centos-systemd:7 /bin/bash on a macOS host I see:
# uname -r
4.9.184-linuxkit
# ls -l /proc/config.gz
-r--r--r-- 1 root root 23834 Nov 20 16:40 /proc/config.gz
but running the exact same thing on an Ubuntu VM I see:
# uname -r
4.4.0-142-generic
# ls -l /proc/config.gz
ls: cannot access /proc/config.gz
[Weirdly I don't see this FATAL: Module configs not found in directory error every time, but I guess that is a separate question!]
UPDATE 22/November/2019: I see now that k8s DOES run okay in a container. The real problem was weird/misleading logs. I have added an answer to clarify.
I do not believe that is possible given the nature of containers.
You should instead test your app in a docker container, then deploy that image to k8s, either in the cloud or locally using minikube.
Another solution is to run it under kind, which uses the Docker driver instead of VirtualBox:
https://kind.sigs.k8s.io/docs/user/quick-start/
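For example, per kind's quick-start guide (kind-kind is the default context name kind creates):
kind create cluster
kubectl cluster-info --context kind-kind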
It seems the FATAL error part was a bit misleading.
It was badly formatted by my test environment (all on one line).
When k8s was failing I saw the FATAL and assumed (incorrectly) that it was the root cause.
When I format the logs nicely I see ...
kubeadm join 172.17.0.2:6443 --token 21e8ab.1e1666a25fd37338 --discovery-token-unsafe-skip-ca-verification --experimental-control-plane --ignore-preflight-errors=all --node-name 172.17.0.3
[preflight] Running pre-flight checks
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-142-generic
DOCKER_VERSION: 18.09.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-142-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.4.0-142-generic\n", err: exit status 1
[discovery] Trying to connect to API Server "172.17.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
[discovery] Failed to request cluster info, will try again: [the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps cluster-info)]
There are other errors later, which I originally thought were a side-effect of the nasty-looking FATAL error, e.g. "[util/etcd] Attempt timed out", but I now think the root cause is that the etcd part sometimes times out.
Adding this answer in case someone else is puzzled like I was.

DashDB Local Docker Deployment

I was able to deploy dashDB Local (SMP) on my Mac (using Kitematic) 3-4 months ago, but recently I have not been able to successfully deploy either SMP or MPP using macOS (Kitematic) or Linux (on AWS, using individual instances with Docker running, not Swarm).
Linux flavor (Default Amazon Linux AMI)
[ec2-user@ip-10-0-0-171 ~]$ cat /etc/*-release
NAME="Amazon Linux AMI"
VERSION="2016.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2016.03"
PRETTY_NAME="Amazon Linux AMI 2016.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2016.03:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
Amazon Linux AMI release 2016.03
Linux Kernel
[ec2-user@ip-10-0-0-171 ~]$ sudo uname -r
4.4.11-23.53.amzn1.x86_64
Docker Version
[ec2-user@ip-10-0-0-171 ~]$ docker --version
Docker version 1.11.2, build b9f10c9/1.11.2
hostname
[ec2-user@ip-10-0-0-171 ~]$ hostname
ip-10-0-0-171
dnsdomainname
[ec2-user@ip-10-0-0-171 ~]$ dnsdomainname
ec2.internal
In every variant approach I always end up with something like the message below after running:
docker run -d -it --privileged=true --net=host --name=dashDB -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 ibmdashdb/preview:latest
(for SMP), or docker exec -it dashDB start (after the run command, for MPP). I tried using getlogs, but couldn't find anything interesting. Any ideas? For SMP I am using a created directory on a single host; for MPP I am using AWS's EFS for a shared NFS mount.
[ec2-user@ip-10-0-0-171 ~]$ docker logs --follow dashDB
/mnt/bludata0/nodes cannot be found. We will continue with a single-node deployment.
Checking if dashDB initialize has been done previously ...
dashDB stack is NOT initialized yet.
#####################################################################
Running dashDB prerequisite checks on node: ip-10-0-0-171
#####################################################################
#####################################################################
Prerequisite check -- Minimum Memory requirement
#####################################################################
* Memory check: PASS
#####################################################################
Prerequisite check -- Minimum data volume free-space requirement
#####################################################################
* Free space in data volume check: PASS
#####################################################################
Prerequisite check -- Minimum number of CPU/CPU core requirement
#####################################################################
* CPU check: PASS
#####################################################################
Prerequisite check -- Data Volume device DIO requirement
#####################################################################
* DIO check: PASS
#####################################################################
Prerequisite check -- Data Volume device I/O stats
#####################################################################
Testing WRITE I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 33.7435 s, 4.0 MB/s
real 0m33.746s
user 0m0.008s
sys 0m12.040s
Testing READ I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 10.8286 s, 12.4 MB/s
real 0m10.831s
user 0m0.116s
sys 0m0.344s
######################################################################
*************************************************
Prerequisite check summary for Node: ip-10-0-0-171
*************************************************
* Memory check: PASS
* Free space in data volume check: PASS
* CPU check: PASS
* DIO check: PASS
*********************************************
I/O perf test summary for Node: ip-10-0-0-171
*********************************************
* Read throughput: 12.4 MB/s
* Write throughput: 4.0 MB/s
######################################################################
Creating dashDB directories and dashDB instance
Starting few of the key services ...
Generating /etc/rndc.key: [ OK ]
Starting named: [ OK ]
Starting saslauthd: [ OK ]
Starting sendmail: [ OK ]
Starting sm-client: [ OK ]
Setting dsserver Config
Setting openldap
Starting slapd: [ OK ]
Starting sssd: [ OK ]
Starting system logger: [ OK ]
Starting nscd: [ OK ]
Update dsserver with ldap info
dashDB set configuration
Setting database configuration
database SSL configuration
-bludb_ssl_keystore_password
-bludb_ssl_certificate_label
UPDATED: /opt/ibm/dsserver/Config/dswebserver.properties
set dashDB Encryption
Setting up keystore
dashDB failed to stop on ip-10-0-0-171 because database services didn't stop.
Retry the operation. If the same failure occurs, contact IBM Service.
If a command prompt is not visible on your screen, you need to detach from the container by typing Ctrl-C.
Stop/Start
[ec2-user@ip-10-0-0-171 ~]$ docker exec -it dashDB stop
Attempt to shutdown services on node ip-10-0-0-171 ...
dsserver_home: /opt/ibm/dsserver
port: -1
https.port: 8443
status.port: 11082
SERVER STATUS: INACTIVE
httpd: no process killed
Instance is already in stopped state due to which database consistency can't be checked
###############################################################################
Successfully stopped dashDB
###############################################################################
[ec2-user@ip-10-0-0-171 ~]$ docker stop dashDB
dashDB
[ec2-user@ip-10-0-0-171 ~]$ docker start dashDB
dashDB
Following the logs again:
[ec2-user@ip-10-0-0-171 ~]$ docker logs --follow dashDB
....SAME INFO FROM BEFORE...
/mnt/bludata0/nodes cannot be found. We will continue with a single-node deployment.
Checking if dashDB initialize has been done previously ...
dashDB stack is NOT initialized yet.
#####################################################################
Running dashDB prerequisite checks on node: ip-10-0-0-171
#####################################################################
#####################################################################
Prerequisite check -- Minimum Memory requirement
#####################################################################
* Memory check: PASS
#####################################################################
Prerequisite check -- Minimum data volume free-space requirement
#####################################################################
* Free space in data volume check: PASS
#####################################################################
Prerequisite check -- Minimum number of CPU/CPU core requirement
#####################################################################
* CPU check: PASS
#####################################################################
Prerequisite check -- Data Volume device DIO requirement
#####################################################################
* DIO check: PASS
#####################################################################
Prerequisite check -- Data Volume device I/O stats
#####################################################################
Testing WRITE I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 34.5297 s, 3.9 MB/s
real 0m34.532s
user 0m0.020s
sys 0m11.988s
Testing READ I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 10.8309 s, 12.4 MB/s
real 0m10.833s
user 0m0.000s
sys 0m0.432s
######################################################################
*************************************************
Prerequisite check summary for Node: ip-10-0-0-171
*************************************************
* Memory check: PASS
* Free space in data volume check: PASS
* CPU check: PASS
* DIO check: PASS
*********************************************
I/O perf test summary for Node: ip-10-0-0-171
*********************************************
* Read throughput: 12.4 MB/s
* Write throughput: 3.9 MB/s
######################################################################
Creating dashDB directories and dashDB instance
mv: cannot stat `/tmp/bashrc_db2inst1': No such file or directory
mv: cannot stat `/tmp/bash_profile_db2inst1': No such file or directory
Starting few of the key services ...
Starting named: [ OK ]
Starting saslauthd: [ OK ]
Starting sendmail: [ OK ]
Setting dsserver Config
mv: cannot stat `/tmp/dswebserver.properties': No such file or directory
Setting openldap
/bin/sh: /tmp/ldap-directories.sh: No such file or directory
cp: cannot stat `/tmp/cn=config.ldif': No such file or directory
mv: cannot stat `/tmp/olcDatabase0bdb.ldif': No such file or directory
cp: cannot stat `/tmp/slapd-sha2.so': No such file or directory
mv: cannot stat `/tmp/cn=module0.ldif': No such file or directory
ln: creating hard link `/var/run/slapd.pid': File exists [ OK ]
Starting sssd: [ OK ]
Starting system logger: [ OK ]
Starting nscd: [ OK ]
Update dsserver with ldap info
dashDB set configuration
Setting database configuration
database SSL configuration
-bludb_ssl_keystore_password
-bludb_ssl_certificate_label
UPDATED: /opt/ibm/dsserver/Config/dswebserver.properties
set dashDB Encryption
dashDB failed to stop on ip-10-0-0-171 because database services didn't stop.
Retry the operation. If the same failure occurs, contact IBM Service.
If a command prompt is not visible on your screen, you need to detach from the container by typing Ctrl-C.
Thank you for testing dashDB Local.
MPP is only supported on Linux.
SMP on Mac is only supported using Kitematic with Docker Toolbox v1.11.1b and using the 'v1.0.0-kitematic' tag image, not 'latest'.
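If you retry SMP on the Mac, pull that tag explicitly rather than latest (the image name is taken from the run command in the question; the tag is the one named above):
docker pull ibmdashdb/preview:v1.0.0-kitematic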
To help you further I'd like to focus on a single environment and for simplicity let's start with SMP on Linux and we can later discuss MPP.
Check the minimum requirements for an SMP installation:
Processor: 2.0 GHz core
Memory: 8 GB RAM
Storage: 20 GB
What is the Linux flavor you use? Check with:
cat /etc/*-release
Make sure you have at least a Linux kernel 3.10. You can check with:
$ uname -r
3.10.0-229.el7.x86_64
Then let's find out what version of docker is installed with:
$ docker --version
Docker version 1.12.1, build 23cf638
Additionally you need to configure a hostname and domain name. You can verify that you have these with:
$ hostname
and
$ dnsdomainname
Also ensure all the required ports are open; the list is long, so check our documentation.
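As an illustrative spot check (not the full port list), you can also verify that nothing else is already bound to a port dashDB needs, e.g. the 8443 console port that appears in your stop output:
sudo ss -tlnp | grep 8443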
Is this system virtual or physical?
Can you show the entire output of the following, as well as all the above checks:
$ docker logs --follow dashDB
Try the following steps, which, if all else is correct, may help resolve this issue. Once you see the error:
$ docker exec -it dashDB stop
$ docker stop dashDB
$ docker start dashDB

uWSGI timeout during basic test in tutorial

I feel so dumb admitting this, but I am struggling with the uWSGI tutorial for Django here.
My problem is after making a test.py file as described in the tutorial, and running the command:
uwsgi --http :8000 --wsgi-file test.py
I go to port :8000 on the IP address of my VPS and the connection times out. I have been playing around with nginx and have been able to get the "Welcome to nginx" screen to show successfully. The output on my terminal after starting uWSGI with the above command is:
--wsgi-file test.py
*** Starting uWSGI 1.9.17.1 (64bit) on [Thu Oct 10 20:58:40 2013] ***
compiled with version: 4.6.3 on 10 October 2013 20:17:02
os: Linux-3.9.3-x86_64-linode33 #1 SMP Mon May 20 10:22:57 EDT 2013
nodename: Name
machine: x86_64
clock source: unix
detected number of CPU cores: 8
current working directory: /usr/local/uwsgi-tutorail/mytest
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 7883
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :8000 fd 4
spawned uWSGI http 1 (pid: 18638)
uwsgi socket 0 bound to TCP address 127.0.0.1:52306 (port auto-assigned) fd 3
Python version: 2.7.3 (default, Sep 26 2013, 20:13:52) [GCC 4.6.3]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x26599f0
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 72792 bytes (71 KB) for 1 cores
*** Operational MODE: single process ***
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x26599f0 pid: 18637 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 18637, cores: 1)
I am a complete newb at uwsgi, any help would be greatly appreciated.
Not an elegant solution but I was able to "fix" the problem by rebuilding my VPS and starting from scratch.
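If someone else hits the same symptom, a few checks are worth running before rebuilding (illustrative commands; <VPS_IP> is a placeholder, and the ufw check only applies if you use ufw):
# on the VPS: confirm the uWSGI http router is listening on 8000
ss -tlnp | grep 8000
# from your workstation: see where the connection stalls
curl -v http://<VPS_IP>:8000/
# a common culprit is a firewall dropping the port
sudo ufw status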
