beaglebone black adc "in_voltageX_raw: Resource temporarily unavailable" - beagleboneblack

I'm struggling to get readings from the analog inputs on the BBB:
cat /sys/bus/iio/devices/iio:device0/in_voltage5_raw
cat: '/sys/bus/iio/devices/iio:device0/in_voltage5_raw': Resource temporarily unavailable
cat /etc/dogtag
BeagleBoard.org Debian Buster Console Image 2020-05-18
uname -a
Linux beaglebone 5.4.38-ti-r8 #1buster SMP PREEMPT Sat May 9 09:53:15 UTC 2020 armv7l GNU/Linux
sudo /opt/scripts/tools/version.sh
git:/opt/scripts/:[029041f6866049997bbfd2c7667b3c6e8c95201c]
eeprom:[A335BNLT000C1716BBBG0543]
model:[TI_AM335x_BeagleBone_Black]
dogtag:[BeagleBoard.org Debian Buster Console Image 2020-05-18]
bootloader:[microSD-(push-button)]:[/dev/mmcblk0]:[U-Boot 2019.04-00002-g31a8ae0206]:[location: dd MBR]
bootloader:[eMMC-(default)]:[/dev/mmcblk1]:[U-Boot 2019.04-00002-g07d5700e21]:[location: dd MBR]
UBOOT: Booted Device-Tree:[am335x-boneblack-uboot-univ.dts]
UBOOT: Loaded Overlay:[AM335X-PRU-RPROC-4-19-TI-00A0]
UBOOT: Loaded Overlay:[BB-ADC-00A0]
UBOOT: Loaded Overlay:[BB-BONE-eMMC1-01-00A0]
kernel:[5.4.38-ti-r8]
/boot/uEnv.txt Settings:
uboot_overlay_options:[enable_uboot_overlays=1]
uboot_overlay_options:[disable_uboot_overlay_video=1]
uboot_overlay_options:[disable_uboot_overlay_audio=1]
uboot_overlay_options:[disable_uboot_overlay_wireless=1]
uboot_overlay_options:[uboot_overlay_pru=/lib/firmware/AM335X-PRU-RPROC-4-19-TI-00A0.dtbo]
uboot_overlay_options:[enable_uboot_cape_universal=1]
pkg check: to individually upgrade run: [sudo apt install --only-upgrade <pkg>]
pkg:[bb-cape-overlays]:[4.14.20201119.0-0~buster+20201123]
WARNING:pkg:[bb-wl18xx-firmware]:[NOT_INSTALLED]
pkg:[kmod]:[26-1]
WARNING:pkg:[librobotcontrol]:[NOT_INSTALLED]
pkg:[firmware-ti-connectivity]:[20190717-2rcnee1~buster+20200305]
groups:[debian : debian adm kmem dialout cdrom floppy audio dip video plugdev users systemd-journal input bluetooth netdev gpio pwm eqep remoteproc admin spi iio i2c docker tisdk weston-launch xenomai cloud9ide]
cmdline:[console=ttyO0,115200n8 bone_capemgr.uboot_capemgr_enabled=1 root=/dev/mmcblk0p1 ro rootfstype=ext4 rootwait coherent_pool=1M net.ifnames=0 lpj=1990656 rng_core.default_quality=100 quiet]
dmesg | grep remote
[ 10.849708] remoteproc remoteproc0: wkup_m3 is available
[ 11.073021] remoteproc remoteproc1: 4a334000.pru is available
[ 11.074751] remoteproc remoteproc2: 4a338000.pru is available
[ 11.078998] remoteproc remoteproc0: powering up wkup_m3
[ 11.079139] remoteproc remoteproc0: Booting fw image am335x-pm-firmware.elf, size 217168
[ 11.083282] remoteproc remoteproc0: remote processor wkup_m3 is now up
dmesg | grep pru
[ 11.073021] remoteproc remoteproc1: 4a334000.pru is available
[ 11.073193] pru-rproc 4a334000.pru: PRU rproc node /ocp/interconnect#4a000000/segment#0/target-module#300000/pruss#0/pru#34000 probed successfully
[ 11.074751] remoteproc remoteproc2: 4a338000.pru is available
[ 11.074891] pru-rproc 4a338000.pru: PRU rproc node /ocp/interconnect#4a000000/segment#0/target-module#300000/pruss#0/pru#38000 probed successfully
dmesg | grep pinctrl-single
[ 1.762133] pinctrl-single 44e10800.pinmux: 142 pins, size 568
dmesg | grep gpio-of-helper
[ 1.776892] gpio-of-helper ocp:cape-universal: ready
END
Anyone have an idea?
Thanks,
Ju

I think you are booting from the SD card, right?
...
Well, you have two different versions of U-Boot: one on the SD card and one on the eMMC. The eMMC takes precedence over the SD card for U-Boot when booting.
If you can, and if you do not mind losing the data on your eMMC, you can wipe the eMMC, or hold the S2/Boot button while applying power for about 5 seconds.
Also, I thought the ADC lines on the chip were already powered at boot; I do not think you need the overlay for the ADC. Please let me know if this works for you.
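If the overlay side turns out to be in order, it can help to tell a persistent driver problem apart from an occasional transient failure: IIO sysfs reads can sporadically fail with EAGAIN ("Resource temporarily unavailable"). A minimal retry sketch (the path is the one from the question; if the error is persistent, retries won't help and the overlay/pinmux setup needs fixing instead):

```shell
# Retry a sysfs ADC read a few times before giving up, since reads can
# transiently fail with EAGAIN while the driver is busy.
read_adc() {
    path=$1
    for _ in 1 2 3 4 5; do
        if val=$(cat "$path" 2>/dev/null); then
            printf '%s\n' "$val"
            return 0
        fi
        sleep 0.1
    done
    return 1
}

read_adc /sys/bus/iio/devices/iio:device0/in_voltage5_raw || echo "ADC read failed"
```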

Related

Docker Build Process Stuck

My OS: Ubuntu 18.04 LTS
My Docker version:
# docker --version
Docker version 19.03.6, build 369ce74a3c
I'm trying to build a Docker image:
docker build -t image:tag .
Sending build context to Docker daemon 187.9kB
Step 1/8 : FROM node:8.16.2-alpine3.9
---> 9c0651c52baf
Step 2/8 : RUN mkdir -p /app
---> Running in 85ecdcc9218c
It gets stuck on step 2 with no activity. Here's the error log from syslog:
dockerd[4988]: time="2020-02-20x08:28:27.xxxxxxxxxx" level=info msg="API listen on /var/run/docker.sock"
systemd[1]: Reloading.
systemd-udevd[5315]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
systemd-udevd[5315]: Could not generate persistent MAC address for vethaxxxxxx: No such file or directory
systemd-networkd[4063]: vethexxxxxx: Link UP
kernel: [ 2304.024934] docker0: port 1(vethexxxxxx) entered blocking state
kernel: [ 2304.024936] docker0: port 1(vethexxxxxx) entered disabled state
kernel: [ 2304.025182] device vethexxxxxx entered promiscuous mode
systemd-timesyncd[4039]: Network configuration changed, trying to establish connection.
systemd-udevd[5317]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
systemd-udevd[5317]: Could not generate persistent MAC address for vethexxxxxx: No such file or directory
kernel: [ 2304.029095] IPv6: ADDRCONF(NETDEV_UP): vethexxxxxx: link is not ready
systemd-timesyncd[4039]: Synchronized to time server 91.189.89.199:123 (ntp.ubuntu.com).
systemd-timesyncd[4039]: Network configuration changed, trying to establish connection.
systemd-timesyncd[4039]: Synchronized to time server 91.189.89.199:123 (ntp.ubuntu.com).
containerd[4987]: time="2020-02-20x08:31:18.xxxxxxxxxx" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/85ecdcc9218c280e97de4bfd38b0d70d83bb601e58a61a2c58fff52db2c90042/shim.sock" debug=false pid=5326
systemd-timesyncd[4039]: Network configuration changed, trying to establish connection.
systemd-timesyncd[4039]: Synchronized to time server 91.189.89.199:123 (ntp.ubuntu.com).
systemd-timesyncd[4039]: Network configuration changed, trying to establish connection.
systemd-networkd[4063]: vethexxxxxx: Gained carrier
systemd-networkd[4063]: docker0: Gained carrier
kernel: [ 2304.285614] eth0: renamed from vetha3b6298
kernel: [ 2304.285866] IPv6: ADDRCONF(NETDEV_CHANGE): vethexxxxxx: link becomes ready
kernel: [ 2304.285900] docker0: port 1(vethe0b5233) entered blocking state
kernel: [ 2304.285901] docker0: port 1(vethe0b5233) entered forwarding state
systemd-timesyncd[4039]: Synchronized to time server 91.189.89.199:123 (ntp.ubuntu.com).
systemd-networkd[4063]: vethe0b5233: Gained IPv6LL
systemd-timesyncd[4039]: Network configuration changed, trying to establish connection.
systemd-timesyncd[4039]: Synchronized to time server 91.189.89.199:123 (ntp.ubuntu.com).
Further, if I press ^C to quit the build process, it breaks my SSH connection too.

Docker not seeing usb /dev/ttyACM0 after unplugging and then replugging

I'm running a Docker container (Ubuntu 18.04) that I use to compile code and flash IoT devices. I run the container with this command, which lets me see the USB devices:
docker run --privileged --device=/dev/ttyACM0 -it -v disc_vol1:/root/zephyr zephyr
However, if for some reason I need to unplug and replug the devices while the container is still running, Docker no longer sees them until I restart the container.
Is there a solution for this problem?
DMESG after unplugging and then replugging:
[388387.919792] usb 3-3: USB disconnect, device number 47
[388387.919796] usb 3-3.1: USB disconnect, device number 48
[388387.957792] FAT-fs (sdb): unable to read boot sector to mark fs as dirty
[388406.517953] usb 3-1: new high-speed USB device number 51 using xhci_hcd
[388406.666047] usb 3-1: New USB device found, idVendor=0424, idProduct=2422
[388406.666051] usb 3-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[388406.666415] hub 3-1:1.0: USB hub found
[388406.666438] hub 3-1:1.0: 2 ports detected
[388407.881910] usb 3-1.1: new full-speed USB device number 52 using xhci_hcd
[388407.986919] usb 3-1.1: New USB device found, idVendor=0d28, idProduct=0204
[388407.986924] usb 3-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[388407.986927] usb 3-1.1: Product: DAPLink CMSIS-DAP
[388407.986929] usb 3-1.1: Manufacturer: ARM
[388407.986932] usb 3-1.1: SerialNumber: 1026000015afe1e800000000000000000000000097969902
[388407.987898] usb-storage 3-1.1:1.0: USB Mass Storage device detected
[388407.988131] scsi host10: usb-storage 3-1.1:1.0
[388407.991188] hid-generic 0003:0D28:0204.00A9: hiddev0,hidraw3: USB HID v1.00 Device [ARM DAPLink CMSIS-DAP] on usb-0000:00:14.0-1.1/input3
[388407.991926] cdc_acm 3-1.1:1.1: ttyACM0: USB ACM device
[388409.014753] scsi 10:0:0:0: Direct-Access MBED VFS 0.1 PQ: 0 ANSI: 2
[388409.015336] sd 10:0:0:0: Attached scsi generic sg2 type 0
[388409.015632] sd 10:0:0:0: [sdb] 131200 512-byte logical blocks: (67.2 MB/64.1 MiB)
[388409.015888] sd 10:0:0:0: [sdb] Write Protect is off
[388409.015892] sd 10:0:0:0: [sdb] Mode Sense: 03 00 00 00
[388409.016103] sd 10:0:0:0: [sdb] No Caching mode page found
[388409.016109] sd 10:0:0:0: [sdb] Assuming drive cache: write through
[388409.045555] sd 10:0:0:0: [sdb] Attached SCSI removable disk
[388482.439345] CIFS VFS: Free previous auth_key.response = 00000000df9e4b01
[388521.789341] CIFS VFS: Free previous auth_key.response = 0000000071020f34
[388554.099064] CIFS VFS: Free previous auth_key.response = 000000002a3aa60b
[388590.132004] CIFS VFS: Free previous auth_key.response = 000000009bed9fb5
[388606.372288] usb 3-1: USB disconnect, device number 51
[388606.372292] usb 3-1.1: USB disconnect, device number 52
[388606.415803] FAT-fs (sdb): unable to read boot sector to mark fs as dirty
[388622.643954] usb 3-3: new high-speed USB device number 53 using xhci_hcd
[388622.792057] usb 3-3: New USB device found, idVendor=0424, idProduct=2422
[388622.792061] usb 3-3: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[388622.792451] hub 3-3:1.0: USB hub found
[388622.792479] hub 3-3:1.0: 2 ports detected
And when I do ls /dev/ttyACM0 or /dev/ttyACM1, nothing changes whether the device is plugged in or not. The problem is that I cannot flash or see the devices with, for example, pyocd: when I do pyocd list, the devices won't show up until I restart the container.
Problem
The problem lies in the device-node creation mechanism.
As you can read in LFS docs, in 9.3.2.2. Device Node Creation:
Device files are created by the kernel by the devtmpfs filesystem.
By comparing mount entries in host:
$ mount
...
udev on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=16259904k,nr_inodes=4064976,mode=755,inode64)
...
...and in container:
# mount
...
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755,inode64)
...
...you can notice that the /dev filesystem in the container isn't the same one as on the host.
It seems to me that a privileged docker container recreates the /dev structure while starting. Later, the kernel does create the device node in devtmpfs, but since the container uses a separate filesystem for devices, the node isn't created there. As confirmation, notice that after unplugging a device that was connected before the container started, its node still persists inside the container but disappears from the host.
Solution
You can work around it by creating the node manually. In this example I plugged in /dev/ttyUSB1 while the container was running.
On the host machine find major and minor device number:
$ ls -la /dev/ttyUSB*
crw-rw----+ 1 root plugdev 188, 0 Dec 5 15:25 /dev/ttyUSB0
crw-rw----+ 1 root plugdev 188, 1 Dec 5 15:26 /dev/ttyUSB1
# ^^^^^^ major and minor number
And create corresponding node inside container:
# ll /dev/ttyUSB*
crw-rw---- 1 root plugdev 188, 0 Dec 5 14:25 /dev/ttyUSB0
# mknod /dev/ttyUSB1 c 188 1
# ll /dev/ttyUSB*
crw-rw---- 1 root plugdev 188, 0 Dec 5 14:25 /dev/ttyUSB0
crw-r--r-- 1 root root 188, 1 Dec 5 15:16 /dev/ttyUSB1
The device should work.
Enhancement
You can also automate node creation by installing udev and writing some custom rules inside the container.
I found this repo that successfully sets up a udev instance inside a container: udevadm monitor there correctly reflects udev events, matching the host.
The last thing is to write some udev rules that will automagically create corresponding nodes inside the container:
ACTION=="add", RUN+="mknod %N c %M %m"
ACTION=="remove", RUN+="rm %N"
I haven't tested it yet, but I see no reason why it wouldn't work.
Better enhancement
You don't need to install udev inside the container. You can run mknod there from a script that runs on the host machine (on the host's udev trigger), as described here. It would be good to handle removing nodes as well.
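As a concrete (untested) sketch of that host-side approach: a udev rule on the host calls a small script, and the script mirrors the node into the running container with docker exec. The container name `zephyr`, the rule file path, and the script path are assumptions for illustration, not something the thread confirms:

```shell
# /etc/udev/rules.d/99-container-tty.rules (host side, assumed path):
#   ACTION=="add",    SUBSYSTEM=="tty", RUN+="/usr/local/bin/container-mknod.sh add %E{DEVNAME} %M %m"
#   ACTION=="remove", SUBSYSTEM=="tty", RUN+="/usr/local/bin/container-mknod.sh remove %E{DEVNAME}"

# /usr/local/bin/container-mknod.sh would run the command this helper
# builds; split out as a function so the command is easy to eyeball.
container_mknod_cmd() {
    action=$1 devname=$2 major=$3 minor=$4
    case $action in
        add)    echo "docker exec zephyr mknod $devname c $major $minor" ;;
        remove) echo "docker exec zephyr rm -f $devname" ;;
    esac
}

# Example: the command that would run when /dev/ttyUSB1 (188:1) appears.
container_mknod_cmd add /dev/ttyUSB1 188 1
```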

Sharing PCIe virtual functions available on the host with a docker container

My host is running in SR-IOV mode and has several physical devices that appear on the PCIe bus. Each physical function has a collection of 32 virtual functions. I want to share one of the virtual functions with a docker container. These are crypto/compression accelerators, and I wrote a driver for one, so I'm familiar with SR-IOV when dealing with bare metal or SR-IOV hypervisors launching virtual machines. But now I'm trying to get access to the virtual functions inside a docker container.
On the host I can lspci and see my physical and virtual devices. But when I launch a container, all I see from within the container are the physical functions.
I have seen the "--device" parameter for "docker run", but I don't think it will work for passing a virtual function to a container.
Logistically, here's what I see on the host:
[localhost] config # lspci | grep "^85" | head -4
85:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
85:01.0 Co-processor: Intel Corporation DH895XCC Series QAT Virtual Function
85:01.1 Co-processor: Intel Corporation DH895XCC Series QAT Virtual Function
85:01.2 Co-processor: Intel Corporation DH895XCC Series QAT Virtual Function
[localhost] config # lspci | grep "^85" | wc
33 295 2524
So we have 1 physical function at 85:00.0, and 32 virtuals.
But when I start the container and do the same examination from inside the container, all I see is the following:
[localhost] config # lspci | grep QAT
04:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
05:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
85:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
I've been told that this can be made to work: I can pass virtual functions into the container, and my driver can do the rest.
My question: how can I pass virtual functions from the host into a container?
As mentioned in the comment (but with the flag name):
docker run -it --rm --cap-add=SYS_RAWIO ...
Then try lspci from inside the container again.
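Independently of the capability flag, it can help to confirm on the host which VF addresses actually exist before trying to hand one to the container. sysfs exposes them as `virtfn*` symlinks under the physical function's device directory; a small sketch (the PF address is taken from the lspci output above, adjust to your device):

```shell
# List the PCI addresses of the virtual functions belonging to a physical
# function, by resolving the virtfn* symlinks sysfs creates under the PF.
list_vfs() {
    pf_dir=$1   # e.g. /sys/bus/pci/devices/0000:85:00.0
    for link in "$pf_dir"/virtfn*; do
        [ -e "$link" ] || continue
        basename "$(readlink -f "$link")"
    done
}

list_vfs /sys/bus/pci/devices/0000:85:00.0
```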

default-address-pools is not recognized by docker

I am trying to configure docker-compose to use a different network range by default, so I followed the instructions from https://github.com/moby/moby/pull/29376
However, I get the following error:
unable to configure the Docker docker daemon with file
/etc/docker/daemon.json: the following directives don't match any
configuration option: default-address-pools
Here is the content of daemon.json; it is the sample taken from #29376.
{
  "default-address-pools": [
    {
      "scope": "local",
      "base": "172.80.0.0/16",
      "size": 24
    },
    {
      "scope": "global",
      "base": "172.90.0.0/16",
      "size": 24
    }
  ]
}
Please advise.
My env:
# uname -a
Linux gfn-classroom 4.4.0-109-generic #132-Ubuntu SMP Tue Jan 9 19:52:39 UTC
2018 x86_64 x86_64 x86_64 GNU/Linux
# docker --version
Docker version 17.12.0-ce, build c97c6d6
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
This was merged in https://github.com/moby/moby/pull/36396 and (hopefully) will be available in 18.06. [Reference]
Also note that changing the default address pool is available as a CLI argument as well, e.g.:
/usr/bin/dockerd -H ... --default-address-pool base=172.29.0.0,size=16
Pull request https://github.com/moby/moby/pull/29376 was closed, not merged, so that feature is not available (yet) in Docker.
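For reference, the format that finally shipped with the second pull request drops the `scope` field; under Docker 18.06 or later, a `/etc/docker/daemon.json` along these lines should be accepted (the pool values here are just examples):

```json
{
  "default-address-pools": [
    {
      "base": "172.80.0.0/16",
      "size": 24
    }
  ]
}
```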

DashDB Local Docker Deployment

I was able to deploy dashDB Local (SMP) on my Mac (using Kitematic) 3-4 months ago, but recently I have not been able to successfully deploy either SMP or MPP using macOS (Kitematic) or Linux (on AWS, using individual instances running Docker, not Swarm).
Linux flavor (Default Amazon Linux AMI)
[ec2-user@ip-10-0-0-171 ~]$ cat /etc/*-release
NAME="Amazon Linux AMI"
VERSION="2016.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2016.03"
PRETTY_NAME="Amazon Linux AMI 2016.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2016.03:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
Amazon Linux AMI release 2016.03
Linux Kernel
[ec2-user@ip-10-0-0-171 ~]$ sudo uname -r
4.4.11-23.53.amzn1.x86_64
Docker Version
[ec2-user@ip-10-0-0-171 ~]$ docker --version
Docker version 1.11.2, build b9f10c9/1.11.2
hostname
[ec2-user@ip-10-0-0-171 ~]$ hostname
ip-10-0-0-171
dnsdomainname
[ec2-user@ip-10-0-0-171 ~]$ dnsdomainname
ec2.internal
In every variant approach I always end up with something like the message below after running:
docker run -d -it --privileged=true --net=host --name=dashDB -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 ibmdashdb/preview:latest
(for SMP) or docker exec -it dashDB start (after the run command, for MPP). I tried using getlogs but couldn't find anything interesting. Any ideas? For SMP I am using a created directory on a single host; for MPP I am using AWS EFS for a shared NFS mount.
[ec2-user@ip-10-0-0-171 ~]$ docker logs --follow dashDB
/mnt/bludata0/nodes cannot be found. We will continue with a single-node deployment.
Checking if dashDB initialize has been done previously ...
dashDB stack is NOT initialized yet.
#####################################################################
Running dashDB prerequisite checks on node: ip-10-0-0-171
#####################################################################
#####################################################################
Prerequisite check -- Minimum Memory requirement
#####################################################################
* Memory check: PASS
#####################################################################
Prerequisite check -- Minimum data volume free-space requirement
#####################################################################
* Free space in data volume check: PASS
#####################################################################
Prerequisite check -- Minimum number of CPU/CPU core requirement
#####################################################################
* CPU check: PASS
#####################################################################
Prerequisite check -- Data Volume device DIO requirement
#####################################################################
* DIO check: PASS
#####################################################################
Prerequisite check -- Data Volume device I/O stats
#####################################################################
Testing WRITE I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 33.7435 s, 4.0 MB/s
real 0m33.746s
user 0m0.008s
sys 0m12.040s
Testing READ I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 10.8286 s, 12.4 MB/s
real 0m10.831s
user 0m0.116s
sys 0m0.344s
######################################################################
*************************************************
Prerequisite check summary for Node: ip-10-0-0-171
*************************************************
* Memory check: PASS
* Free space in data volume check: PASS
* CPU check: PASS
* DIO check: PASS
*********************************************
I/O perf test summary for Node: ip-10-0-0-171
*********************************************
* Read throughput: 12.4 MB/s
* Write throughput: 4.0 MB/s
######################################################################
Creating dashDB directories and dashDB instance
Starting few of the key services ...
Generating /etc/rndc.key: [ OK ]
Starting named: [ OK ]
Starting saslauthd: [ OK ]
Starting sendmail: [ OK ]
Starting sm-client: [ OK ]
Setting dsserver Config
Setting openldap
Starting slapd: [ OK ]
Starting sssd: [ OK ]
Starting system logger: [ OK ]
Starting nscd: [ OK ]
Update dsserver with ldap info
dashDB set configuration
Setting database configuration
database SSL configuration
-bludb_ssl_keystore_password
-bludb_ssl_certificate_label
UPDATED: /opt/ibm/dsserver/Config/dswebserver.properties
set dashDB Encryption
Setting up keystore
dashDB failed to stop on ip-10-0-0-171 because database services didn't stop.
Retry the operation. If the same failure occurs, contact IBM Service.
If a command prompt is not visible on your screen, you need to detach from the container by typing Ctrl-C.
Stop/Start
[ec2-user@ip-10-0-0-171 ~]$ docker exec -it dashDB stop
Attempt to shutdown services on node ip-10-0-0-171 ...
dsserver_home: /opt/ibm/dsserver
port: -1
https.port: 8443
status.port: 11082
SERVER STATUS: INACTIVE
httpd: no process killed
Instance is already in stopped state due to which database consistency can't be checked
###############################################################################
Successfully stopped dashDB
###############################################################################
[ec2-user@ip-10-0-0-171 ~]$ docker stop dashDB
dashDB
[ec2-user@ip-10-0-0-171 ~]$ docker start dashDB
dashDB
Follow the logs again:
[ec2-user@ip-10-0-0-171 ~]$ docker logs --follow dashDB
....SAME INFO FROM BEFORE...
/mnt/bludata0/nodes cannot be found. We will continue with a single-node deployment.
Checking if dashDB initialize has been done previously ...
dashDB stack is NOT initialized yet.
#####################################################################
Running dashDB prerequisite checks on node: ip-10-0-0-171
#####################################################################
#####################################################################
Prerequisite check -- Minimum Memory requirement
#####################################################################
* Memory check: PASS
#####################################################################
Prerequisite check -- Minimum data volume free-space requirement
#####################################################################
* Free space in data volume check: PASS
#####################################################################
Prerequisite check -- Minimum number of CPU/CPU core requirement
#####################################################################
* CPU check: PASS
#####################################################################
Prerequisite check -- Data Volume device DIO requirement
#####################################################################
* DIO check: PASS
#####################################################################
Prerequisite check -- Data Volume device I/O stats
#####################################################################
Testing WRITE I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 34.5297 s, 3.9 MB/s
real 0m34.532s
user 0m0.020s
sys 0m11.988s
Testing READ I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 10.8309 s, 12.4 MB/s
real 0m10.833s
user 0m0.000s
sys 0m0.432s
######################################################################
*************************************************
Prerequisite check summary for Node: ip-10-0-0-171
*************************************************
* Memory check: PASS
* Free space in data volume check: PASS
* CPU check: PASS
* DIO check: PASS
*********************************************
I/O perf test summary for Node: ip-10-0-0-171
*********************************************
* Read throughput: 12.4 MB/s
* Write throughput: 3.9 MB/s
######################################################################
Creating dashDB directories and dashDB instance
mv: cannot stat `/tmp/bashrc_db2inst1': No such file or directory
mv: cannot stat `/tmp/bash_profile_db2inst1': No such file or directory
Starting few of the key services ...
Starting named: [ OK ]
Starting saslauthd: [ OK ]
Starting sendmail: [ OK ]
Setting dsserver Config
mv: cannot stat `/tmp/dswebserver.properties': No such file or directory
Setting openldap
/bin/sh: /tmp/ldap-directories.sh: No such file or directory
cp: cannot stat `/tmp/cn=config.ldif': No such file or directory
mv: cannot stat `/tmp/olcDatabase0bdb.ldif': No such file or directory
cp: cannot stat `/tmp/slapd-sha2.so': No such file or directory
mv: cannot stat `/tmp/cn=module0.ldif': No such file or directory
ln: creating hard link `/var/run/slapd.pid': File exists [ OK ]
Starting sssd: [ OK ]
Starting system logger: [ OK ]
Starting nscd: [ OK ]
Update dsserver with ldap info
dashDB set configuration
Setting database configuration
database SSL configuration
-bludb_ssl_keystore_password
-bludb_ssl_certificate_label
UPDATED: /opt/ibm/dsserver/Config/dswebserver.properties
set dashDB Encryption
dashDB failed to stop on ip-10-0-0-171 because database services didn't stop.
Retry the operation. If the same failure occurs, contact IBM Service.
If a command prompt is not visible on your screen, you need to detach from the container by typing Ctrl-C.
Thank you for testing dashDB Local.
MPP is only supported on Linux.
SMP on Mac is only supported using Kitematic with Docker Toolbox v1.11.1b and the 'v1.0.0-kitematic' image tag, not 'latest'.
To help you further I'd like to focus on a single environment; for simplicity, let's start with SMP on Linux, and we can discuss MPP later.
Check the minimum requirements for an SMP installation:
Processor 2.0 GHz core
Memory 8 GB RAM
Storage 20 GB
Which Linux flavor are you using? Check with:
cat /etc/*-release
Make sure you have at least a Linux kernel 3.10. You can check with:
$ uname -r
3.10.0-229.el7.x86_64
Then let's find out which version of Docker is installed:
$ docker --version
Docker version 1.12.1, build 23cf638
Additionally, you need to configure a hostname and a domain name. You can verify that you have these with:
$ hostname
and
$ dnsdomainname
Also ensure all the required ports are open; the list is long. Check our documentation.
Is this system virtual or physical?
Can you show the entire output of the following, as well as all of the above checks:
$ docker logs --follow dashDB
Try the following steps, which (if everything else is correct) may help resolve this issue. Once you see the error:
$ docker exec -it dashDB stop
$ docker stop dashDB
$ docker start dashDB
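The version checks above are easy to script; here is a minimal sketch for the kernel requirement (the 3.10 threshold comes from this answer, and nothing in the script is dashDB-specific):

```shell
# Return success if the kernel release string is >= 3.10.
kernel_ok() {
    ver=$1                 # e.g. the output of `uname -r`
    major=${ver%%.*}       # text before the first dot
    rest=${ver#*.}
    minor=${rest%%.*}      # text between the first and second dots
    [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }
}

kernel_ok "$(uname -r)" && echo "kernel OK" || echo "kernel too old for dashDB Local"
```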
