Does anyone know the repository of "Traceroute For Linux"? - traceroute

TRACEROUTE(8) Traceroute For Linux TRACEROUTE(8)
NAME
traceroute - print the route packets trace to network host
SYNOPSIS
traceroute [-46dFITUnreAV] [-f first_ttl] [-g gate,...]
[-i device] [-m max_ttl] [-p port] [-s src_addr]
[-q nqueries] [-N squeries] [-t tos]
[-l flow_label] [-w waittime] [-z sendwait]
[-UL] [-P proto] [--sport=port] [-M method] [-O mod_options]
[--mtu] [--back]
host [packet_len]
traceroute6 [options]
I want to see its source, but can't find it anywhere...
Does anyone know the git/cvs/svn repository of "Traceroute For Linux"?

Here you go:
http://packages.ubuntu.com/source/natty/traceroute
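If you just want to read the code, on a Debian/Ubuntu system you can also fetch the packaged source directly (this assumes deb-src entries are enabled in your apt sources):

```shell
# Downloads and unpacks the traceroute source package into the current directory
apt-get source traceroute
```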


How to add initial chunk to informix DB spaces?

I have a newly created Informix database instance with the following DB spaces:
RootDBS, temptbs, logdbs, physdbs
I have four chunks that I need to assign to these DB spaces initially. What is the way to do that? Is there any related documentation about this? Please mention the documentation.
You can use the onspaces command to add further chunks to an existing dbspace or to create new dbspaces. Documentation for this can be found in the Knowledge Center at https://www.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.admin.doc/ids_admin_0561.htm - for example in the sections "Adding a chunk to a dbspace or blobspace" and "Creating a dbspace that uses the default page size."
This is the documentation for Informix version 12.10 but the command syntax is the same in earlier releases.
The initial chunk of the root dbspace, rootdbs, is specified in the $ONCONFIG file (which is located in $INFORMIXDIR/etc; see onconfig file for documentation on the format) before you initialize the server with oninit.
ROOTNAME rootdbs
ROOTPATH /opt/informix/dev/osiris_19.rootdbs.c0
ROOTOFFSET 0
ROOTSIZE 1500000
The other dbspaces have to be created separately with onspaces after you've brought the basic server online.
Usage:
onspaces { -a <spacename> -p <path> -o <offset> -s <size> [-m <path> <offset>]
{ { [-Mo <mdoffset>] [-Ms <mdsize>] } | -U }
} |
{ -c { -d <DBspace> [-k <pagesize>] [-t]
-p <path> -o <offset> -s <size> [-m <path> <offset>] } |
{ -d <DBspace> [-k <pagesize>]
-p <path> -o <offset> -s <size> [-m <path> <offset>]
[-ef <first_extent_size>] [-en <next_extent_size>] } |
{ -P <PLOGspace>
-p <path> -o <offset> -s <size> [-m <path> <offset>] } |
{ -b <BLOBspace> -g <pagesize>
-p <path> -o <offset> -s <size> [-m <path> <offset>] } |
{ -S <SBLOBspace> [-t]
-p <path> -o <offset> -s <size> [-m <path> <offset>]
[-Mo <mdoffset>] [-Ms <mdsize>] [-Df <default-list>] } |
{ -x <Extspace> -l <Location> } } |
{ -d <spacename> [-p <path> -o <offset>] [-f] [-y] } |
{ -f[y] off [<DBspace-list>] | on [<DBspace-list>] } |
{ -m <spacename> {-p <path> -o <offset> -m <path> <offset> [-y] |
-f <filename>} } |
{ -r <spacename> [-y] } |
{ -s <spacename> -p <path> -o <offset> {-O | -D} [-y] } |
{ -ch <sbspacename> -Df <default-list> } |
{ -cl <sbspacename> } |
{ -ren <spacename> -n <newname> }
-a - Add a chunk to a DBspace, BLOBspace or SBLOBspace
-c - Create a DBspace, PLOGspace, BLOBspace, SBLOBspace, or Extspace
-d - Drop an empty DBspace, PLOGspace, BLOBspace, SBLOBspace, Extspace,
or chunk
-f - Change dataskip default for specified DBspaces
-m - Add mirroring to an existing DBspace, PLOGspace, BLOBspace or
SBLOBspace
-r - Turn mirroring off for a DBspace, PLOGspace, BLOBspace or SBLOBspace
-s - Change the status of a chunk
-ch - Change default list for smart large object space
-cl - garbage collect smart large objects that are not referenced
default-list = {[LOGGING = {ON|OFF}] [,ACCESSTIME = {ON|OFF}]
[,AVG_LO_SIZE = {1 - 2097152}] }
-ren - Rename a DBspace, BLOBspace, SBLOBspace or Extspace
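For example, the non-root dbspaces from the question could be created with commands along these lines. The chunk paths, offsets, and sizes here are made up for illustration (substitute your own), and -t marks temptbs as a temporary dbspace:

```shell
# Hypothetical chunk paths and sizes -- adjust for your system
onspaces -c -d temptbs -t -p /opt/informix/dev/temptbs.c0 -o 0 -s 500000
onspaces -c -d logdbs -p /opt/informix/dev/logdbs.c0 -o 0 -s 1000000
onspaces -c -d physdbs -p /opt/informix/dev/physdbs.c0 -o 0 -s 1000000

# Adding a further chunk to an existing dbspace uses -a
onspaces -a logdbs -p /opt/informix/dev/logdbs.c1 -o 0 -s 1000000
```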
The logdbs and physdbs dbspaces are presumably for the logical logs and the physical log. Those would be created as normal dbspaces, and then you'd move the logs to those spaces with onparams:
Usage: onparams { -a -d <DBspace> [-s <size>] [-i] } |
{ -b -g <pagesize> [-n <number of buffers>]
[-r <number of LRUs>] [-x <maxdirty>] [-m <mindirty>] } |
{ -d -l <log file number> [-y] } |
{ -p -s <size> [-d <DBspace>] [-y] }
-a - Add a logical log file
-b - Add a buffer pool
-i - Insert after current log
-d - Drop a logical log file
-p - Change physical log size and location
-y - Automatically responds "yes" to all prompts
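Sketching those log moves with onparams (sizes and log numbers here are illustrative, not taken from the question):

```shell
# Hypothetical sizes in KB -- adjust for your system
onparams -a -d logdbs -s 100000       # add a logical log file in logdbs
onparams -d -l 1 -y                   # drop logical log file number 1 once it is free
onparams -p -s 200000 -d physdbs -y   # move the physical log to physdbs
```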
The temptbs is presumably a temporary dbspace, which you'll end up listing in your $ONCONFIG file too (as DBSPACETEMP).
You might end up with (dumb) blob spaces and smart blob spaces too, and you'll probably end up with a temporary sbspace (smart blob space) specified in $ONCONFIG too (as SBSPACETEMP).
You can use the onmode utility to set (some but not all) configuration parameters while the server is running with the -wf option, for example. You could set entries such as SBSPACETEMP like that.
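A sketch of that onmode usage (the sbspace name here is a made-up example):

```shell
# -wf updates the parameter and writes the change to the ONCONFIG file
onmode -wf SBSPACETEMP=sbtmpspace
```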

IBM Cloud Private Docker logged in as root rather than ubuntu

When I run the docker command on the ICP tutorial:
docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
I receive an error that I am logged in as root instead of the ubuntu user. What might be causing this, and how can it be fixed?
PLAY [Checking Python interpreter] *********************************************
TASK [Checking Python interpreter] *********************************************
changed: [10.2.7.26]
PLAY [Checking prerequisites] **************************************************
TASK [Gathering Facts] *********************************************************
[WARNING]: sftp transfer mechanism failed on [10.2.7.26]. Use ANSIBLE_DEBUG=1
to see detailed information
[WARNING]: scp transfer mechanism failed on [10.2.7.26]. Use ANSIBLE_DEBUG=1
to see detailed information
fatal: [10.2.7.26]: FAILED! => {"changed": false, "module_stderr": "Connection to 10.2.7.26 closed.\r\n", "module_stdout": "Please login as the user \"ubuntu\" rather than the user \"root\".\r\n\r\n", "msg": "MODULE FAILURE", "rc": 0}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
10.2.7.26 : ok=1 changed=1 unreachable=0 failed=1
Edit:
The error from the verbose message:
<10.2.7.26> ESTABLISH SSH CONNECTION FOR USER: root
<10.2.7.26> SSH: EXEC ssh -C -o CheckHostIP=no -o LogLevel=ERROR -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 'IdentityFile="/installer/cluster/ssh_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=60 10.2.7.26 'dd of=Please login as the user "ubuntu" rather than the user "root"./setup.py bs=65536'
<10.2.7.26> (0, 'Please login as the user "ubuntu" rather than the user "root".\n\n', '')
However, this error occurs when I use my private key generated by my cloud provider. When I follow the SSH key generation guide here: https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/installing/ssh_keys.html
I get this error:
<10.2.7.26> ESTABLISH SSH CONNECTION FOR USER: root
<10.2.7.26> SSH: EXEC ssh -C -o CheckHostIP=no -o LogLevel=ERROR -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 'IdentityFile="/installer/cluster/ssh_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=60 -tt 10.2.7.26 'ls /usr/bin/python &>/dev/null || (echo "Can'"'"'t find Python interpreter(/usr/bin/python) on your node" && exit 1)'
<10.2.7.26> (255, '', 'Permission denied (publickey).\r\n')
fatal: [10.2.7.26]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Permission denied >(publickey).\r\n",
"unreachable": true
}
The hosts:
[master]
10.2.7.26
[worker]
10.2.7.26
[proxy]
10.2.7.26
The Config.yaml:
network_type: calico
kubelet_extra_args: ["--fail-swap-on=false"]
cluster_domain: cluster.local
etcd_extra_args: ["--grpc-keepalive-timeout=0", "--grpc-keepalive-interval=0",
"--snapshot-count=10000"]
default_admin_user: admin
default_admin_password: admin
disabled_management_services: ["istio", "vulnerability-advisor", "custom-metrics-adapter"]
image-security-enforcement:
  clusterImagePolicy:
    - name: "docker.io/ibmcom/*"
      policy:
ICP installation requires root permissions. Could you try installing ICP with the command below?
sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
For more details, see the link below:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/installing/install_containers_CE.html

Elasticsearch docker image with data persistence

I am having an issue with data persistence on my Elasticsearch Docker image on my Linux AWS EC2 machine.
I am launching the container like so:
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 \
-v $PWD/elasticsearch/data:/usr/share/elasticsearch/data \
-e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
The issue is with the -v $PWD/elasticsearch/data:/usr/share/elasticsearch/data line. On Mac everything works correctly and I can persist my data after bringing down the container, but on the Linux machine I get permission errors on the /usr/share/elasticsearch/data directory in the container.
Error (line 3 is the critical part):
[2018-07-06T00:39:35,479][INFO ][o.e.n.Node ] [] initializing ...
[2018-07-06T00:39:35,503][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data/docker-cluster]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/usr/share/elasticsearch/data/docker-cluster]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:244) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.4.jar:6.2.4]
... 6 more
Caused by: java.io.IOException: failed to obtain lock on /usr/share/elasticsearch/data/nodes/0
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:223) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.4.jar:6.2.4]
... 6 more
Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes/0/node.lock
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) ~[?:?]
at java.nio.channels.FileChannel.open(FileChannel.java:287) ~[?:1.8.0_161]
at java.nio.channels.FileChannel.open(FileChannel.java:335) ~[?:1.8.0_161]
at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-7.2.1.jar:7.2.1 b2b6438b37073bee1fca40374e85bf91aa457c0b - ubuntu - 2018-01-10 00:48:43]
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:209) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.4.jar:6.2.4]
... 6 more
What do I need to add to make this work on linux?
This will work.
Set permissions first:
sudo mkdir -p $PWD/elasticsearch/data
sudo chmod 777 -R $PWD/elasticsearch/data
Then:
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 \
-v $PWD/elasticsearch/data:/usr/share/elasticsearch/data \
-e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
I found a solution by investigating the ownership of the folders from inside and outside the Docker container. It started working after I ran sudo chown -R 1000:root $PWD/elasticsearch/data before launching the container, so that the user inside the container owned the directory.
Why does the same docker run produce two different results on two different machines? Isn't the point of Docker to be one size fits all?
This works for now, but I would like a better solution, because I don't know whether 1000 will always be the UID used inside the container.
In your docker-compose.yml, you can specify the user with user: $USER, like this:
elasticsearch:
  user: $USER
  image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.5
  volumes:
    - /srv/graylog/es_data:/usr/share/elasticsearch/data
This also solves the problem, and you don't need to run the chown command.
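The plain docker run equivalent of that compose setting would be something like the following sketch; using $(id -u) assumes the bind-mounted directory is owned by your current host user:

```shell
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 \
  --user "$(id -u):$(id -g)" \
  -v "$PWD/elasticsearch/data:/usr/share/elasticsearch/data" \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
```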
Why this error happens?
Elasticsearch can't be started as the root user; this is probably an extra security measure from the development team. Therefore, the Elasticsearch Docker image checks whether the current container user is root and, if it is, changes the user to elasticsearch:elasticsearch or 1000:1000.
Possible solution
You can change the directory owner to 1000:1000. It will work. But what if you can't change the owner? I had this problem recently: I was trying to map an NFS directory into Docker and couldn't change the owner on NFS.
Final solution
The docker-entrypoint.sh script located at /usr/local/bin/docker-entrypoint.sh is responsible for checking for, and switching away from, the root user. It can be overridden with a simple custom image like the following:
FROM docker.elastic.co/elasticsearch/elasticsearch:7.11.0
COPY ./docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
Below is an example of a modified docker-entrypoint.sh that currently works for Elasticsearch 7.11.0. You can always get the original docker-entrypoint.sh by starting a test container and using something like
docker cp <elasticsearch-container>:/usr/local/bin/docker-entrypoint.sh ./docker-entrypoint.sh
Modified docker-entrypoint.sh
#!/bin/bash
set -e
# Files created by Elasticsearch should always be group writable too
umask 0002
run_as_other_user_if_needed() {
if [[ "$(id -u)" == "0" ]]; then
# If running as root, drop to specified UID and run command
exec chroot --userspec=<uid>:<gid> / "$@"
else
# Either we are running in Openshift with random uid and are a member of the root group
# or with a custom --user
exec "$@"
fi
}
# Allow user specify custom CMD, maybe bin/elasticsearch itself
# for example to directly specify `-E` style parameters for elasticsearch on k8s
# or simply to run /bin/bash to check the image
if [[ "$1" != "eswrapper" ]]; then
if [[ "$(id -u)" == "0" && $(basename "$1") == "elasticsearch" ]]; then
# centos:7 chroot doesn't have the `--skip-chdir` option and
# changes our CWD.
# Rewrite CMD args to replace $1 with `elasticsearch` explicitly,
# so that we are backwards compatible with the docs
# from the previous Elasticsearch versions<6
# and configuration option D:
# https://www.elastic.co/guide/en/elasticsearch/reference/5.6/docker.html#_d_override_the_image_8217_s_default_ulink_url_https_docs_docker_com_engine_reference_run_cmd_default_command_or_options_cmd_ulink
# Without this, user could specify `elasticsearch -E x.y=z` but
# `bin/elasticsearch -E x.y=z` would not work.
set -- "elasticsearch" "${@:2}"
# Use chroot to switch to UID 1000 / GID 0
exec chroot --userspec=<uid>:<gid> / "$@"
else
# User probably wants to run something else, like /bin/bash, with another uid forced (Openshift?)
exec "$@"
fi
fi
# Allow environment variables to be set by creating a file with the
# contents, and setting an environment variable with the suffix _FILE to
# point to it. This can be used to provide secrets to a container, without
# the values being specified explicitly when running the container.
#
# This is also sourced in elasticsearch-env, and is only needed here
# as well because we use ELASTIC_PASSWORD below. Sourcing this script
# is idempotent.
source /usr/share/elasticsearch/bin/elasticsearch-env-from-file
if [[ -f bin/elasticsearch-users ]]; then
# Check for the ELASTIC_PASSWORD environment variable to set the
# bootstrap password for Security.
#
# This is only required for the first node in a cluster with Security
# enabled, but we have no way of knowing which node we are yet. We'll just
# honor the variable if it's present.
if [[ -n "$ELASTIC_PASSWORD" ]]; then
[[ -f /usr/share/elasticsearch/config/elasticsearch.keystore ]] || (run_as_other_user_if_needed elasticsearch-keystore create)
if ! (run_as_other_user_if_needed elasticsearch-keystore has-passwd --silent) ; then
# keystore is unencrypted
if ! (run_as_other_user_if_needed elasticsearch-keystore list | grep -q '^bootstrap.password$'); then
(run_as_other_user_if_needed echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x 'bootstrap.password')
fi
else
# keystore requires password
if ! (run_as_other_user_if_needed echo "$KEYSTORE_PASSWORD" \
| elasticsearch-keystore list | grep -q '^bootstrap.password$') ; then
COMMANDS="$(printf "%s\n%s" "$KEYSTORE_PASSWORD" "$ELASTIC_PASSWORD")"
(run_as_other_user_if_needed echo "$COMMANDS" | elasticsearch-keystore add -x 'bootstrap.password')
fi
fi
fi
fi
if [[ "$(id -u)" == "0" ]]; then
# If requested and running as root, mutate the ownership of bind-mounts
if [[ -n "$TAKE_FILE_OWNERSHIP" ]]; then
chown -R <uid>:<gid> /usr/share/elasticsearch/{data,logs}
fi
fi
if [[ -n "$ES_LOG_STYLE" ]]; then
case "$ES_LOG_STYLE" in
console)
# This is the default. Nothing to do.
;;
file)
# Overwrite the default config with the stack config
mv /usr/share/elasticsearch/config/log4j2.file.properties /usr/share/elasticsearch/config/log4j2.properties
;;
*)
echo "ERROR: ES_LOG_STYLE set to [$ES_LOG_STYLE]. Expected [console] or [file]" >&2
exit 1 ;;
esac
fi
# Signal forwarding and child reaping is handled by `tini`, which is the
# actual entrypoint of the container
run_as_other_user_if_needed /usr/share/elasticsearch/bin/elasticsearch <<<"$KEYSTORE_PASSWORD"
Note that the comments in the code are the original ones; it may be confusing that they say the code will change the user to elasticsearch.
This solved it for me:
sudo chown 1000:1000 -R $PWD/elasticsearch/data
The reason is that ES now runs as user 1000 and needs the directory to be writable by user 1000.
In case somebody lands here with the same problem as me:
java.lang.IllegalStateException: failed to obtain node locks, tried
[[/usr/share/elasticsearch/data]] with lock id [0]; maybe these locations
are not writable or multiple nodes were started without increasing
[node.max_local_storage_nodes] (was [1])?
If you created two or more containers with docker-compose.yml, check that the volumes are different for each container. That was the cause in my case.

Docker - sh service script doesn't take options

I have this Docker sh service script in /etc/init.d on a Debian 8 machine:
#!/bin/sh
set -e
### BEGIN INIT INFO
# Provides: docker
# Required-Start: $syslog $remote_fs
# Required-Stop: $syslog $remote_fs
# Should-Start: cgroupfs-mount cgroup-lite
# Should-Stop: cgroupfs-mount cgroup-lite
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Create lightweight, portable, self-sufficient containers.
# Description:
# Docker is an open-source project to easily create lightweight, portable,
# self-sufficient containers from any application. The same container that a
# developer builds and tests on a laptop can run at scale, in production, on
# VMs, bare metal, OpenStack clusters, public clouds and more.
### END INIT INFO
export PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
BASE=docker
# modify these in /etc/default/$BASE (/etc/default/docker)
DOCKER=/usr/bin/$BASE
# This is the pid file managed by docker itself
DOCKER_PIDFILE=/var/run/$BASE.pid
# This is the pid file created/managed by start-stop-daemon
DOCKER_SSD_PIDFILE=/var/run/$BASE-ssd.pid
DOCKER_LOGFILE=/var/log/$BASE.log
DOCKER_DESC="Docker"
DOCKER_OPTS="--insecure-registry 127.0.0.1:9000"
# Get lsb functions
. /lib/lsb/init-functions
if [ -f /etc/default/$BASE ]; then
. /etc/default/$BASE
fi
# Check docker is present
if [ ! -x $DOCKER ]; then
log_failure_msg "$DOCKER not present or not executable"
exit 1
fi
check_init() {
# see also init_is_upstart in /lib/lsb/init-functions (which isn't available in Ubuntu 12.04, or we'd use it directly)
if [ -x /sbin/initctl ] && /sbin/initctl version 2>/dev/null | grep -q upstart; then
log_failure_msg "$DOCKER_DESC is managed via upstart, try using service $BASE $1"
exit 1
fi
}
fail_unless_root() {
if [ "$(id -u)" != '0' ]; then
log_failure_msg "$DOCKER_DESC must be run as root"
exit 1
fi
}
cgroupfs_mount() {
# see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
if grep -v '^#' /etc/fstab | grep -q cgroup \
|| [ ! -e /proc/cgroups ] \
|| [ ! -d /sys/fs/cgroup ]; then
return
fi
if ! mountpoint -q /sys/fs/cgroup; then
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
fi
(
cd /sys/fs/cgroup
for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
mkdir -p $sys
if ! mountpoint -q $sys; then
if ! mount -n -t cgroup -o $sys cgroup $sys; then
rmdir $sys || true
fi
fi
done
)
}
echo -n "$1"
case "$1" in
start)
echo "start"
check_init
fail_unless_root
cgroupfs_mount
touch "$DOCKER_LOGFILE"
chgrp docker "$DOCKER_LOGFILE"
ulimit -n 1048576
if [ "$BASH" ]; then
ulimit -u 1048576
else
ulimit -p 1048576
fi
echo $DOCKER_OPTS
log_begin_msg "Starting $DOCKER_DESC: $BASE"
start-stop-daemon --start --background \
--no-close \
--exec "$DOCKER" \
--pidfile "$DOCKER_SSD_PIDFILE" \
--make-pidfile \
-- \
daemon -p "$DOCKER_PIDFILE" \
$DOCKER_OPTS \
>> "$DOCKER_LOGFILE" 2>&1
log_end_msg $?
;;
stop)
check_init
fail_unless_root
log_begin_msg "Stopping $DOCKER_DESC: $BASE"
start-stop-daemon --stop --pidfile "$DOCKER_SSD_PIDFILE" --retry 10
log_end_msg $?
;;
restart)
check_init
fail_unless_root
docker_pid=`cat "$DOCKER_SSD_PIDFILE" 2>/dev/null`
[ -n "$docker_pid" ] \
&& ps -p $docker_pid > /dev/null 2>&1 \
&& $0 stop
$0 start
;;
force-reload)
check_init
fail_unless_root
$0 restart
;;
status)
echo "Prova"
check_init
status_of_proc -p "$DOCKER_SSD_PIDFILE" "$DOCKER" "$DOCKER_DESC"
echo "a"
;;
statu)
echo "Prova"
check_init
status_of_proc -p "$DOCKER_SSD_PIDFILE" "$DOCKER" "$DOCKER_DESC"
echo "a"
;;
*)
echo "Usage: service docker {start|stop|restart|status} PIPPO"
exit 1
;;
esac
The problem is that it doesn't pass the DOCKER_OPTS value to start-stop-daemon in the start section (or at least the options don't take effect and don't appear in the command line of the resulting process in ps aux output).
We've tried putting DOCKER_OPTS directly in the script as well as letting it be read from the default config file, but the result is the same: the options have no effect.
If we launch the process with start-stop-daemon directly from the terminal with the same options, it works just fine.
What could be the reason for this?
On a side note, we also played with the script a bit and found this strange situation:
we made a copy of the status section called statu, then called both, and they gave us different results.
status :
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled)
Active: active (running) since Thu 2016-02-25 12:40:43 CET; 4min 35s ago
Docs: https://docs.docker.com
statu:
[FAIL[....] Docker is not running ... failed!
Since the code is the same, this is quite a surprise. Why is that?
Systemd doesn't use the scripts in /etc/init.d. From your output, it's using the package default configuration in /lib/systemd/system/docker.service. If you want to make changes:
# make a local copy, /lib/systemd can be overwritten on an application upgrade
cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
# edit /etc/systemd/system/docker.service
systemctl daemon-reload # updates systemd from files on the disk
systemctl restart docker # restart the service with your changes
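Alternatively, instead of copying the whole unit file, a drop-in override keeps only the changed setting. This is a sketch: it re-adds the --insecure-registry flag from the question's DOCKER_OPTS, and the ExecStart line shown is an assumption; copy the exact one from your installed docker.service when clearing and redefining it:

```shell
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/options.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --insecure-registry 127.0.0.1:9000
EOF
systemctl daemon-reload      # pick up the drop-in
systemctl restart docker     # restart with the extra option
```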

Why can't I run a simple ping from the IBM Liberty Docker image

I am learning about IBM Containers and Docker. I created a 2-line Docker file to test it out:
FROM registry-ice.ng.bluemix.net/ibmliberty
CMD ["ping","google.com"]
Unfortunately, when I run a container from this image, it gives the following output:
> docker run liberty-ping
Usage: ping [-aAbBdDfhLnOqrRUvV] [-c count] [-i interval] [-I interface]
[-m mark] [-M pmtudisc_option] [-l preload] [-p pattern] [-Q tos]
[-s packetsize] [-S sndbuf] [-t ttl] [-T timestamp_option]
[-w deadline] [-W timeout] [hop1 ...] destination
When I changed the FROM line to FROM ubuntu:trusty, the ping executed flawlessly. What is going on?
Thanks to the friendly comments, I found that the websphere-liberty Dockerfile does include an ENTRYPOINT: https://github.com/WASdev/ci.docker/blob/master/websphere-liberty/8.5.5/developer/Dockerfile#L57
Reading the Dockerfile reference at http://docs.docker.com/reference/builder/#cmd I understood that there was a CMD form that supplies parameters to an ENTRYPOINT, but I didn't realize that the other two flavors of the CMD syntax wouldn't work when an ENTRYPOINT was set.
I changed the dockerfile to this:
FROM registry-ice.ng.bluemix.net/ibmliberty
ENTRYPOINT ["/bin/ping"]
CMD ["google.com"]
Now it works!
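Since an exec-form CMD is simply appended to the ENTRYPOINT as default arguments, the ping target can also be swapped at run time without rebuilding the image (the address here is just an example):

```shell
# Arguments after the image name replace CMD, so this runs /bin/ping 8.8.8.8
docker run liberty-ping 8.8.8.8
```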
