ceph mount failed with (95) Operation not supported - storage

I installed Ceph on servers "A" and "B" and I would like to mount it from server "C" or "D",
but I get the error below:
ceph-fuse[4628]: ceph mount failed with (95) Operation not supported
My server configuration is as follows:
A Server: Ubuntu 16.04 (ceph-server) 10.1.1.54
B Server: Ubuntu 16.04 (ceph-server) 10.1.1.138
C Server: Amazon Linux (client)
D Server: Ubuntu 16.04 (client)
and here is my ceph.conf:
[global]
fsid = 44f299ac-ff11-41c8-ab96-225d62cb3226
mon_initial_members = node01, node02
mon_host = 10.1.1.54,10.1.1.138
auth cluster required = none
auth service required = none
auth client required = none
auth supported = none
osd pool default size = 2
public network = 10.1.1.0/24
Ceph itself appears to be installed correctly.
ceph health
HEALTH_OK
ceph -s
cluster 44f299ac-ff11-41c8-ab96-225d62cb3226
health HEALTH_OK
monmap e1: 2 mons at {node01=10.1.1.54:6789/0,node02=10.1.1.138:6789/0}
election epoch 12, quorum 0,1 node01,node02
osdmap e41: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v100: 64 pgs, 1 pools, 306 bytes data, 4 objects
69692 kB used, 30629 MB / 30697 MB avail
64 active+clean
An error occurred when using the ceph-fuse command.
sudo ceph-fuse -m 10.1.1.138:6789 /mnt/mycephfs/ --debug-auth=10 --debug-ms=10
ceph-fuse[4628]: starting ceph client
2017-11-02 08:57:22.905630 7f8cfdd60f00 -1 init, newargv = 0x55779de6af60 newargc=11
ceph-fuse[4628]: ceph mount failed with (95) Operation not supported
ceph-fuse[4626]: mount failed: (95) Operation not supported
I got the error "ceph mount failed with (95) Operation not supported",
so I added the option "--auth-client-required=none":
sudo ceph-fuse -m 10.1.1.138:6789 /mnt/mycephfs/ --debug-auth=10 --debug-ms=10 --auth-client-required=none
ceph-fuse[4649]: starting ceph client
2017-11-02 09:03:47.501363 7f1239858f00 -1 init, newargv = 0x5597621eaf60 newargc=11
The behavior changed: now there is no response at all.
I get the error below if I do not use the ceph-fuse command:
sudo mount -t ceph 10.1.1.138:6789:/ /mnt/mycephfs
can't read superblock
Somehow, it seems the client still needs to authenticate even with "auth supported = none".
In that case, how can I pass authentication from servers "C" or "D"?
Please let me know if there is a possible cause other than authentication.

I think you need more steps, such as creating the file system, so you should review your installation steps against your purpose. Ceph has multiple components for each service, such as object storage, block storage, file system and API, and each service requires its own configuration steps.
This installation guide is helpful for your case:
https://github.com/infn-bari-school/cloud-storage-tutorials/wiki/Ceph-cluster-installation-(jewel-on-CentOS)
If you want to build a Ceph file system for testing, you can build a small CephFS with the following installation steps.
I'll skip the details of the steps and CLI usage; you can get more information from the official documents.
Environment information
Ceph version: Jewel, 10.2.9
OS: CentOS 7.4
Prerequisites before installing the Ceph file system
This configuration requires 4 nodes:
ceph-admin node: deploys the monitor and metadata server
ceph-osd0: OSD service
ceph-osd1: OSD service
ceph-osd2: OSD service
Enable NTP on all nodes
The OS user used to deploy the Ceph components requires privilege escalation (e.g. sudoers)
SSH public key configuration (direction: ceph-admin -> OSD nodes)
Install the ceph-deploy tool on the ceph-admin (admin) node.
# yum install -y ceph-deploy
Deploying the required Ceph components for the Ceph file system
Create the cluster on the ceph-admin node as the normal OS user (the one used for deploying Ceph components):
$ mkdir ./cluster
$ cd ./cluster
$ ceph-deploy new ceph-admin
Modify the ceph.conf in the cluster directory:
$ vim ceph.conf
[global]
..snip...
mon_initial_members = ceph-admin
mon_host = $MONITORSERVER_IP_OR_HOSTNAME
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# the number of replicas for objects in the pool, default value is 3
osd pool default size = 3
public network = $YOUR_SERVICE_NETWORK_CIDR
Install the monitor and OSD services on the related nodes:
$ ceph-deploy install --release jewel ceph-admin ceph-osd0 ceph-osd1 ceph-osd2
Initialize the monitor service:
$ ceph-deploy mon create-initial
Create the OSD devices:
ceph-deploy osd create ceph-osd{0..2}:vdb
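Before moving on, it can be worth confirming that all OSDs registered and came up; a quick sketch using the standard status commands:
ceph osd tree     # every OSD should be listed as "up"
ceph -s           # overall health should eventually report HEALTH_OK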
Adding the metadata server component for the Ceph file system service
Add the metadata server (this service is only required for the Ceph file system):
ceph-deploy mds create ceph-admin
Check the status:
ceph mds stat
Create the pools for CephFS:
ceph osd pool create cephfs_data_pool 64
ceph osd pool create cephfs_meta_pool 64
Create the Ceph file system:
ceph fs new cephfs cephfs_meta_pool cephfs_data_pool
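To verify that the file system was created and an MDS became active, the standard status commands can be used (a sketch; the exact output wording varies by release):
ceph fs ls        # should list "cephfs" with cephfs_meta_pool and cephfs_data_pool
ceph mds stat     # should show one MDS as active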
Mount the Ceph file system
The ceph-fuse package is required on the node that will do the mounting.
Mount it as CephFS:
ceph-fuse -m MONITOR_SERVER_IP_OR_HOSTNAME:PORT_NUMBER <LOCAL_MOUNTPOINT>
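For example, with the layout above where the monitor runs on ceph-admin, a mount attempt might look like this (a sketch; /mnt/cephfs is just an example mount point, adjust for your environment):
sudo mkdir -p /mnt/cephfs
sudo ceph-fuse -m ceph-admin:6789 /mnt/cephfs
df -h /mnt/cephfs     # confirm the CephFS mount is visible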
End...

I solved this problem by fixing three settings.
1.
I reverted the auth settings in ceph.conf to the following:
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
2.
The public network setting was wrong.
public network = 10.1.1.0/24
↓
public network = 10.0.0.0/8
My client IP address was 10.1.0.238, which is not in 10.1.1.0/24 (but is in 10.0.0.0/8).
It was a stupid mistake.
3.
I changed the secret option to the secretfile option and everything was fine.
This case failed:
sudo mount -t ceph 10.1.1.138:6789:/ /mnt/mycephfs -o name=client.admin,secret=`sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring`
output:
mount error 1 = Operation not permitted
but this case succeeded:
sudo mount -vvvv -t ceph 10.1.1.138:6789:/ /mnt/mycephfs -o name=admin,secretfile=admin.secret
output:
parsing options: rw,name=admin,secretfile=admin.secret
mount: error writing /etc/mtab: Invalid argument
※ The "Invalid argument" error seems to be safe to ignore.
Apparently, both options pass the same key:
sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
AQBd9f9ZSL46MBAAqwepJDC5tuIL/uYp0MXjCA==
cat admin.secret
AQBd9f9ZSL46MBAAqwepJDC5tuIL/uYp0MXjCA==
I don't know the reason, but I could mount using the secretfile option.
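For reference, this is roughly how the admin.secret file can be produced from the keyring (a sketch using the same keyring path and mount command as above; keep the file readable only by root):
sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring | sudo tee admin.secret > /dev/null
sudo chmod 600 admin.secret
sudo mount -t ceph 10.1.1.138:6789:/ /mnt/mycephfs -o name=admin,secretfile=admin.secret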

Related

Kubernetes garbage collection clean docker components

I am currently running a k8s cluster; however, occasionally I get memory issues and the following error pops up:
Failed create pod sandbox: rpc error: code = Unknown desc = failed to create a sandbox for pod "<web app>": Error response from daemon: devmapper: Thin Pool has 6500 free data blocks which is less than minimum required 7781 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior
I can resolve this by manually running docker ps -a -f status=exited -q | xargs -r docker rm -v
However, I want Kubernetes to do this work itself. Currently, in my kubelet config I have:
evictionHard:
imagefs.available: 15%
memory.available: "100Mi"
nodefs.available: 10%
nodefs.inodesFree: 5%
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
What am I doing wrong?
Reading the error you've posted, it seems to me that you are using "devicemapper" as the storage driver.
The devicemapper storage driver is deprecated in Docker Engine 18.09, and will be removed in a future release. It is recommended that users of the devicemapper storage driver migrate to overlay2.
I suggest you use "overlay2" as the storage driver, unless you are running an unsupported OS. See the supported OS versions here.
You can check your current storage driver using the docker info command; you will get output like this:
Client:
Debug Mode: false
Server:
Containers: 21
Running: 18
Paused: 0
Stopped: 3
Images: 11
Server Version: 19.03.5
Storage Driver: devicemapper <<== See here
Pool Name: docker-8:1-7999625-pool
Pool Blocksize: 65.54kB
...
Supposing you want to change the storage driver from devicemapper to overlay2, you need to follow these steps:
Changing the storage driver makes existing containers and images inaccessible on the local system. Use docker save to save any images you have built or push them to Docker Hub or a private registry before changing the storage driver, so that you do not need to re-create them later.
Before following this procedure, you must first meet all the prerequisites.
Stop Docker.
$ sudo systemctl stop docker
Copy the contents of /var/lib/docker to a temporary location.
$ cp -au /var/lib/docker /var/lib/docker.bk
If you want to use a separate backing filesystem from the one used by /var/lib/, format the filesystem and mount it into /var/lib/docker. Make sure to add this mount to /etc/fstab to make it permanent.
Edit /etc/docker/daemon.json. If it does not yet exist, create it. Assuming that the file was empty, add the following contents.
{
"storage-driver": "overlay2"
}
Docker does not start if the daemon.json file contains badly-formed JSON.
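Since malformed JSON keeps the daemon from starting, it can be worth validating the file before restarting Docker (a sketch; any JSON validator will do):
$ sudo python -m json.tool /etc/docker/daemon.json    # prints the JSON if valid, or a parse error if not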
Start Docker.
$ sudo systemctl start docker
Verify that the daemon is using the overlay2 storage driver. Use the docker info command and look for Storage Driver and Backing filesystem.
Client:
Debug Mode: false
Server:
Containers: 35
Running: 15
Paused: 0
Stopped: 20
Images: 11
Server Version: 19.03.5
Storage Driver: overlay2 <=== HERE
Backing Filesystem: extfs <== HERE
Supports d_type: true
Extracted from Docker Documentation.
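If you just want the driver name without scanning the whole docker info output, a one-liner like this can help (a sketch using docker info's --format template support):
docker info --format '{{.Driver}}'    # prints e.g. "devicemapper" or "overlay2"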

Can I run k8s master INSIDE a docker container? Getting errors about k8s looking for host's kernel details

In a docker container I want to run k8s.
When I run kubeadm join ... or kubeadm init commands I see sometimes errors like
\"modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could
not open moddep file
'/lib/modules/3.10.0-1062.1.2.el7.x86_64/modules.dep.bin'.
nmodprobe:
FATAL: Module configs not found in directory
/lib/modules/3.10.0-1062.1.2.el7.x86_64",
err: exit status 1
because (I think) my container does not have the expected kernel header files.
I realise that the container reports its kernel based on the host that is running the container; and looking at k8s code I see
// getKernelConfigReader search kernel config file in a predefined list. Once the kernel config
// file is found it will read the configurations into a byte buffer and return. If the kernel
// config file is not found, it will try to load kernel config module and retry again.
func (k *KernelValidator) getKernelConfigReader() (io.Reader, error) {
possibePaths := []string{
"/proc/config.gz",
"/boot/config-" + k.kernelRelease,
"/usr/src/linux-" + k.kernelRelease + "/.config",
"/usr/src/linux/.config",
}
so I am a bit confused about the simplest way to run k8s inside a container such that it consistently gets past this kernel check.
I note that running docker run -it solita/centos-systemd:7 /bin/bash on a macOS host I see :
# uname -r
4.9.184-linuxkit
# ls -l /proc/config.gz
-r--r--r-- 1 root root 23834 Nov 20 16:40 /proc/config.gz
but running exact same on a Ubuntu VM I see :
# uname -r
4.4.0-142-generic
# ls -l /proc/config.gz
ls: cannot access /proc/config.gz
[Weirdly I don't see this FATAL: Module configs not found in directory error every time, but I guess that is a separate question!]
UPDATE 22/November/2019. I see now that k8s DOES run okay in a container. Real problem was weird/misleading logs. I have added an answer to clarify.
I do not believe that is possible given the nature of containers.
You should instead test your app in a docker container, then deploy that image to k8s either in the cloud or locally using minikube.
Another solution is to run it under kind, which uses the docker driver instead of VirtualBox:
https://kind.sigs.k8s.io/docs/user/quick-start/
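For example, a minimal kind session looks roughly like this (a sketch; it assumes kind and kubectl are already installed as described in the quick-start guide above):
kind create cluster      # boots a single-node cluster inside a Docker container
kubectl get nodes        # kind switches your kubeconfig context to the new cluster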
It seems the FATAL error part was a bit misleading.
It was badly formatted by my test environment (all on one line).
When k8s was failing I saw the FATAL and assumed (incorrectly) that it was the root cause.
When I format the logs nicely I see ...
kubeadm join 172.17.0.2:6443 --token 21e8ab.1e1666a25fd37338 --discovery-token-unsafe-skip-ca-verification --experimental-control-plane --ignore-preflight-errors=all --node-name 172.17.0.3
[preflight] Running pre-flight checks
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-142-generic
DOCKER_VERSION: 18.09.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-142-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.4.0-142-generic\n", err: exit status 1
[discovery] Trying to connect to API Server "172.17.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
[discovery] Failed to request cluster info, will try again: [the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps cluster-info)]
There are other errors later, which I originally thought were a side-effect of the nasty-looking FATAL error, e.g. .... "[util/etcd] Attempt timed out"]}, but I now think the root cause is that the etcd part sometimes times out.
Adding this answer in case someone else puzzled like I was.

docker run hello-world results in "Incorrect Usage" error: "flag provided but not defined: -console"

When running docker run hello-world I get an "Incorrect Usage" error (full output pasted below). I'm running the following:
Docker 17.05.0-ce, build 89658be
docker-containerd 0.2.3 (commit 9048e5e)
runc v1.0.0-rc4
Linux kernel 4.1.15
Using buildroot 2017.11 (commit 1f1a242) to generate custom toolchain/rootfs
systemd 234
It seems as though I can pull the hello-world image down properly, as it is included in the docker images output. I'm wondering if there is an incompatibility between docker/containerd/runc, or maybe something more obvious? This is my first time working with docker.
Additionally, I've run a docker check-config.sh script I found that states the only kernel configuration features I'm missing are optional. They are CONFIG_CGROUP_PIDS, CONFIG_CGROUP_HUGETLB, CONFIG_AUFS_FS, /dev/zfs, zfs command, and zpool command. Everything else, including all required, are enabled.
Output:
# docker run hello-world
[ 429.332968] device vethc0d83d1 entered promiscuous mode
[ 429.359681] IPv6: ADDRCONF(NETDEV_UP): vethc0d83d1: link is not ready
Incorrect Usage.
NAME:
docker-runc create - create a container
USAGE:
docker-runc create [command options] <container-id>
Where "<container-id>" is your name for the instance of the container that you
are starting. The name you provide for the container instance must be unique on
your host.
DESCRIPTION:
The create command creates an instance of a container for a bundle. The bundle
is a directory with a specification file named "config.json" and a root
filesystem.
The specification file includes an args parameter. The args parameter is used
to specify command(s) that get run when the container is started. To change the
command(s) that get executed on start, edit the args parameter of the spec. See
"runc spec --help" for more explanation.
OPTIONS:
--bundle value, -b value path to the root of the bundle directory, defaults to the current directory
--console-socket value path to an AF_UNIX socket which will receive a file descriptor referencing the master end of the console's pseudoterminal
--pid-file value specify the file to write the process id to
--no-pivot do not use pivot root to jail process inside rootfs. This should be used whenever the rootfs is on top of a ramdisk
--no-new-keyring do not create a new session keyring for the container. This will cause the container to inherit the calling processes session key
--preserve-fds value Pass N additional file descriptors to the container (stdio + $LISTEN_FDS + N in total) (default: 0)
flag provided but not defined: -console
[ 429.832198] docker0: port 1(vethc0d83d1) entered disabled state
[ 429.849301] device vethc0d83d1 left promiscuous mode
[ 429.859317] docker0: port 1(vethc0d83d1) entered disabled state
docker: Error response from daemon: oci runtime error: flag provided but not defined: -console.
The -console option was replaced with --console-socket in runc Dec 2016 for v1.0.0-rc4.
So I would guess you need an older version of runc or a newer version of Docker.
If you are building Docker yourself, use Docker 17.09.0-ce or an older release of runc; I'm not sure if that's v0.1.1 or just an earlier 1.0 release like v1.0.0-rc2.
If you were upgrading packages, something has gone wrong with the install. Probably purge everything and reinstall Docker.
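To see which versions your engine is actually pairing, the installed binaries can be asked directly (a sketch; in the 17.05 packaging shown above the runtime binary is named docker-runc):
docker --version
docker-runc --version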

dashDB local MPP deployment issue - cannot connect to database

I am facing a huge problem deploying a dashDB Local cluster. After a seemingly successful deployment, the following error appears when I try to create a single table or launch a query. Furthermore, the web server is not working properly, unlike in my previous SMP deployment.
Cannot connect to database "BLUDB" on node "20" because the difference
between the system time on the catalog node and the virtual timestamp
on this node is greater than the max_time_diff database manager
configuration parameter.. SQLCODE=-1472, SQLSTATE=08004,
DRIVER=4.18.60
I followed the official deployment guide, so the following were double-checked:
each physical machines' and docker containers' /etc/hosts file contains all ips, fully qualified and simple hostnames
there is a NFS preconfigured and mounted to /mnt/clusterfs on every single server
none of the servers showed an error during the "docker logs --follow dashDB" phase
nodes config file is located in /mnt/clusterfs directory
After starting dashDB with the following command:
docker exec -it dashDB start
everything looks as it should (see below), but the error can be found in /opt/ibm/dsserver/logs/dsserver.0.log.
#
--- dashDB stack service status summary ---
##################################################################### Redirecting to /bin/systemctl status slapd.service
SUMMARY
LDAPrunning: SUCCESS
dashDBtablesOnline: SUCCESS
WebConsole : SUCCESS
dashDBconnectivity : SUCCESS
dashDBrunning : SUCCESS
#
--- dashDB high availability status ---
#
Configuring dashDB high availability ... Stopping the system Stopping
datanode dashdb02 Stopping datanode dashdb01 Stopping headnode
dashdb03 Running sm on head node dashdb03 .. Running sm on data node
dashdb02 .. Running sm on data node dashdb01 .. Attempting to activate
previously failed nodes, if any ... SM is RUNNING on headnode dashdb03
(ACTIVE) SM is RUNNING on datanode dashdb02 (ACTIVE) SM is RUNNING on
datanode dashdb01 (ACTIVE) Overall status : RUNNING
After several redeployments nothing has changed. Please help me figure out what I am doing wrong.
Many Thanks, Daniel
Always make sure the NTP service is started on every single cluster node before starting the docker containers; otherwise it will have no effect.
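For example, on systemd-based nodes a quick check could look like this (a sketch; the service may be named ntpd, ntp or chronyd depending on the distribution):
sudo systemctl enable --now ntpd    # or ntp / chronyd, depending on the distro
timedatectl status                  # check the NTP / system clock synchronized line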

DashDB Local Docker Deployment

I was able to deploy dashDB Local (SMP) on my Mac (using Kitematic) 3-4 months ago, but recently I have not been able to successfully deploy either SMP or MPP using macOS (Kitematic) or Linux (on AWS using individual instances with docker running, not swarm).
Linux flavor (Default Amazon Linux AMI)
[ec2-user#ip-10-0-0-171 ~]$ cat /etc/*-release
NAME="Amazon Linux AMI"
VERSION="2016.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2016.03"
PRETTY_NAME="Amazon Linux AMI 2016.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2016.03:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
Amazon Linux AMI release 2016.03
Linux Kernel
[ec2-user#ip-10-0-0-171 ~]$ sudo uname -r
4.4.11-23.53.amzn1.x86_64
Docker Version
[ec2-user#ip-10-0-0-171 ~]$ docker --version
Docker version 1.11.2, build b9f10c9/1.11.2
hostname
[ec2-user#ip-10-0-0-171 ~]$ hostname
ip-10-0-0-171
dnsdomainname
[ec2-user#ip-10-0-0-171 ~]$ dnsdomainname
ec2.internal
With every variant of the approach I always end up with something like the message below after running:
docker run -d -it --privileged=true --net=host --name=dashDB -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 ibmdashdb/preview:latest
(for SMP) or docker exec -it dashDB start (after the run command, for MPP). I tried using getlogs, but couldn't find anything interesting. Any ideas? For SMP I am using a created directory on a single host; for MPP I am using AWS EFS for a shared NFS mount.
[ec2-user#ip-10-0-0-171 ~]$ docker logs --follow dashDB
/mnt/bludata0/nodes cannot be found. We will continue with a single-node deployment.
Checking if dashDB initialize has been done previously ...
dashDB stack is NOT initialized yet.
#####################################################################
Running dashDB prerequisite checks on node: ip-10-0-0-171
#####################################################################
#####################################################################
Prerequisite check -- Minimum Memory requirement
#####################################################################
* Memory check: PASS
#####################################################################
Prerequisite check -- Minimum data volume free-space requirement
#####################################################################
* Free space in data volume check: PASS
#####################################################################
Prerequisite check -- Minimum number of CPU/CPU core requirement
#####################################################################
* CPU check: PASS
#####################################################################
Prerequisite check -- Data Volume device DIO requirement
#####################################################################
* DIO check: PASS
#####################################################################
Prerequisite check -- Data Volume device I/O stats
#####################################################################
Testing WRITE I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 33.7435 s, 4.0 MB/s
real 0m33.746s
user 0m0.008s
sys 0m12.040s
Testing READ I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 10.8286 s, 12.4 MB/s
real 0m10.831s
user 0m0.116s
sys 0m0.344s
######################################################################
*************************************************
Prerequisite check summary for Node: ip-10-0-0-171
*************************************************
* Memory check: PASS
* Free space in data volume check: PASS
* CPU check: PASS
* DIO check: PASS
*********************************************
I/O perf test summary for Node: ip-10-0-0-171
*********************************************
* Read throughput: 12.4 MB/s
* Write throughput: 4.0 MB/s
######################################################################
Creating dashDB directories and dashDB instance
Starting few of the key services ...
Generating /etc/rndc.key: [ OK ]
Starting named: [ OK ]
Starting saslauthd: [ OK ]
Starting sendmail: [ OK ]
Starting sm-client: [ OK ]
Setting dsserver Config
Setting openldap
Starting slapd: [ OK ]
Starting sssd: [ OK ]
Starting system logger: [ OK ]
Starting nscd: [ OK ]
Update dsserver with ldap info
dashDB set configuration
Setting database configuration
database SSL configuration
-bludb_ssl_keystore_password
-bludb_ssl_certificate_label
UPDATED: /opt/ibm/dsserver/Config/dswebserver.properties
set dashDB Encryption
Setting up keystore
dashDB failed to stop on ip-10-0-0-171 because database services didn't stop.
Retry the operation. If the same failure occurs, contact IBM Service.
If a command prompt is not visible on your screen, you need to detach from the container by typing Ctrl-C.
Stop/Start
[ec2-user#ip-10-0-0-171 ~]$ docker exec -it dashDB stop
Attempt to shutdown services on node ip-10-0-0-171 ...
dsserver_home: /opt/ibm/dsserver
port: -1
https.port: 8443
status.port: 11082
SERVER STATUS: INACTIVE
httpd: no process killed
Instance is already in stopped state due to which database consistency can't be checked
###############################################################################
Successfully stopped dashDB
###############################################################################
[ec2-user#ip-10-0-0-171 ~]$ docker stop dashDB
dashDB
[ec2-user#ip-10-0-0-171 ~]$ docker start dashDB
dashDB
[ec2-user#ip-10-0-0-171 ~]$ docker logs --follow dashDB
Follow the logs again
[ec2-user#ip-10-0-0-171 ~]$ docker logs --follow dashDB
....SAME INFO FROM BEFORE...
/mnt/bludata0/nodes cannot be found. We will continue with a single-node deployment.
Checking if dashDB initialize has been done previously ...
dashDB stack is NOT initialized yet.
#####################################################################
Running dashDB prerequisite checks on node: ip-10-0-0-171
#####################################################################
#####################################################################
Prerequisite check -- Minimum Memory requirement
#####################################################################
* Memory check: PASS
#####################################################################
Prerequisite check -- Minimum data volume free-space requirement
#####################################################################
* Free space in data volume check: PASS
#####################################################################
Prerequisite check -- Minimum number of CPU/CPU core requirement
#####################################################################
* CPU check: PASS
#####################################################################
Prerequisite check -- Data Volume device DIO requirement
#####################################################################
* DIO check: PASS
#####################################################################
Prerequisite check -- Data Volume device I/O stats
#####################################################################
Testing WRITE I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 34.5297 s, 3.9 MB/s
real 0m34.532s
user 0m0.020s
sys 0m11.988s
Testing READ I/O performance of the data volume device
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 10.8309 s, 12.4 MB/s
real 0m10.833s
user 0m0.000s
sys 0m0.432s
######################################################################
*************************************************
Prerequisite check summary for Node: ip-10-0-0-171
*************************************************
* Memory check: PASS
* Free space in data volume check: PASS
* CPU check: PASS
* DIO check: PASS
*********************************************
I/O perf test summary for Node: ip-10-0-0-171
*********************************************
* Read throughput: 12.4 MB/s
* Write throughput: 3.9 MB/s
######################################################################
Creating dashDB directories and dashDB instance
mv: cannot stat `/tmp/bashrc_db2inst1': No such file or directory
mv: cannot stat `/tmp/bash_profile_db2inst1': No such file or directory
Starting few of the key services ...
Starting named: [ OK ]
Starting saslauthd: [ OK ]
Starting sendmail: [ OK ]
Setting dsserver Config
mv: cannot stat `/tmp/dswebserver.properties': No such file or directory
Setting openldap
/bin/sh: /tmp/ldap-directories.sh: No such file or directory
cp: cannot stat `/tmp/cn=config.ldif': No such file or directory
mv: cannot stat `/tmp/olcDatabase0bdb.ldif': No such file or directory
cp: cannot stat `/tmp/slapd-sha2.so': No such file or directory
mv: cannot stat `/tmp/cn=module0.ldif': No such file or directory
ln: creating hard link `/var/run/slapd.pid': File exists [ OK ]
Starting sssd: [ OK ]
Starting system logger: [ OK ]
Starting nscd: [ OK ]
Update dsserver with ldap info
dashDB set configuration
Setting database configuration
database SSL configuration
-bludb_ssl_keystore_password
-bludb_ssl_certificate_label
UPDATED: /opt/ibm/dsserver/Config/dswebserver.properties
set dashDB Encryption
dashDB failed to stop on ip-10-0-0-171 because database services didn't stop.
Retry the operation. If the same failure occurs, contact IBM Service.
If a command prompt is not visible on your screen, you need to detach from the container by typing Ctrl-C.
Thank you for testing dashDB Local.
MPP is only supported on Linux.
SMP on Mac is only supported using Kitematic with Docker Toolbox v1.11.1b and using the 'v1.0.0-kitematic' tag image, not 'latest'.
To help you further I'd like to focus on a single environment and for simplicity let's start with SMP on Linux and we can later discuss MPP.
Check the minimum requirements for an SMP installation (see the quick checks below):
Processor: 2.0 GHz core
Memory: 8 GB RAM
Storage: 20 GB
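A quick way to eyeball those on each host (a sketch; /mnt/clusterfs is just the example data-volume path from the question, adjust as needed):
$ nproc                     # number of CPU cores
$ lscpu | grep 'MHz'        # CPU clock speed
$ free -g                   # total RAM in GB
$ df -h /mnt/clusterfs      # free space on the data volume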
What is the Linux flavor you use? Check with:
cat /etc/*-release
Make sure you have at least a Linux kernel 3.10. You can check with:
$ uname -r
3.10.0-229.el7.x86_64
Then let's find out what version of docker is installed with:
$ docker --version
Docker version 1.12.1, build 23cf638
Additionally you need to configure a hostname and domain name. You can verify that you have these with:
$ hostname
and
$ dnsdomainname
Also ensure all the required ports are open; the list is long, so check our documentation.
Is this system virtual or physical?
Can you show the entire output of the following command, as well as all the checks above:
$ docker logs --follow dashDB
Try the following steps, which, if everything else is correct, may help resolve this issue. Once you see the error:
$ docker exec -it dashDB stop
$ docker stop dashDB
$ docker start dashDB
