I am finding that Docker on my Ubuntu 18.04 host is not retaining, on the host, files written to a directory mounted into a container.
Image: hashicorp/terraform
I'm using --mount to bind a directory into the container; the directory is where the Terraform config files are stored. I then run the container, which executes terraform, which writes its state files and everything else.
In about 70% of cases those files don't survive the container. I can see them being created on the host while the container is running, but once the container is done doing its thing, the files disappear.
Is that a docker or a terraform issue?
Adding more info:
docker run --mount type=bind,source='/home/david/demo',target=/demo -w /demo -it hashicorp/terraform plan -out tfstate
terraform version
0.11.13
docker version
Client:
Version: 18.09.5
API version: 1.39
Go version: go1.10.8
Git commit: e8ff056
Built: Thu Apr 11 04:43:57 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.5
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: e8ff056
Built: Thu Apr 11 04:10:53 2019
OS/Arch: linux/amd64
Experimental: false
After reading all the comments on your question, here is a summary of how I tested your scenario and my outcome.
My docker version:
Docker version 18.09.1, build 4c52b90
Terraform:
Terraform v0.11.13
+ provider.azurerm v1.24.0
I've created a folder which contains my main.tf file with the following configuration:
provider "azurerm" {
version = "=1.24.0"
subscription_id = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
client_id = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
client_secret = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX="
tenant_id = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
resource "azurerm_resource_group" "rg" {
name = "testResourceGroup"
location = "westus"
}
I'm behind a proxy, so I set HTTPS_PROXY the first time, and because I'm connecting to Azure I ran init first to download the provider plugin:
docker run --env HTTPS_PROXY="http://myproxyfqdn:port" --rm --mount type=bind,source='/Docker/NFS/terraform',target='/terraform' -w /terraform -it hashicorp/terraform:full init
After this execution, the folder on my host was updated with a .terraform folder containing the plugin:
# ls -ltra
-rw-r--r-- 1 root root 759 Apr 23 09:00 main.tf
drwxr-xr-x 3 root root 4096 Apr 23 09:09 .terraform
Then I ran plan with the -out parameter, which created a plan file for later use:
# docker run --env HTTPS_PROXY="http://myproxyfqdn:port" --rm --mount type=bind,source='/Docker/NFS/terraform',target='/terraform' -w /terraform -it hashicorp/terraform:full plan -out testplan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ azurerm_resource_group.rg
id: <computed>
location: "westus"
name: "testResourceGroup"
tags.%: <computed>
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
That created the plan file in my folder:
# ls -ltra
-rw-r--r-- 1 root root 759 Apr 23 09:00 main.tf
drwxr-xr-x 3 root root 4096 Apr 23 09:09 .terraform
-rw-r--r-- 1 root root 5291 Apr 23 09:11 testplan
And then applying the plan created terraform.tfstate:
# docker run --env HTTPS_PROXY="http://myproxyfqdn:port" --rm --mount type=bind,source='/Docker/NFS/terraform',target='/terraform' -w /terraform -it hashicorp/terraform:full apply testplan
azurerm_resource_group.rg: Creating...
location: "" => "westus"
name: "" => "testResourceGroup"
tags.%: "" => "<computed>"
azurerm_resource_group.rg: Creation complete after 2s (ID: /subscriptions/8d43a801-58b6-4dde-84cc-...c60e6/resourceGroups/testResourceGroup)
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Updates on host folder:
# ls -ltra
-rw-r--r-- 1 root root 759 Apr 23 09:00 main.tf
drwxr-xr-x 3 root root 4096 Apr 23 09:09 .terraform
-rw-r--r-- 1 root root 5291 Apr 23 09:11 testplan
-rw-r--r-- 1 root root 3748 Apr 23 09:11 terraform.tfstate
I did not have any problems, and every execution updated the data in the host folder.
Related
This question relates to this repository with the most relevant Travis job here.
The repository is for a static site built from Jupyter notebooks. The notebooks are converted using build/build.py which, for each post, builds a Docker image, starts a corresponding container with the post notebook directory mounted, and uses nbconvert to convert the notebook to Markdown. One step of nbconvert's conversion involves creating a supporting file directory. This fails on Travis due to a permission issue.
In attempting to debug this problem, I found that the ownership and permissions of the repo are the same on my local machine and Travis (with my username switched for travis) before running Docker. Despite this, inside the mounted volume of the Docker container, the ownerships are different:
Local:
drwxrwxr-x 3 jovyan 1000 4096 Dec 10 19:56 .
drwsrwsr-x 1 jovyan users 4096 Dec 3 21:51 ..
-rw-rw-r-- 1 jovyan 1000 105 Dec 7 09:57 Dockerfile
drwxr-xr-x 2 jovyan 1000 4096 Dec 10 12:09 .ipynb_checkpoints
-rw-r--r-- 1 jovyan 1000 154229 Dec 10 12:28 post.ipynb
Travis:
drwxrwxr-x 2 2000 2000 4096 Dec 10 19:58 .
drwsrwsr-x 1 jovyan users 4096 Nov 8 16:37 ..
-rw-rw-r-- 1 2000 2000 101 Dec 10 19:58 Dockerfile
-rw-rw-r-- 1 2000 2000 35271 Dec 10 19:58 post.ipynb
Both my local machine and Travis are running Ubuntu 20.04, have the same version of Docker, and all other tools come from Conda, so they should behave the same. I am struggling to understand where this difference in ownership comes from.
Try running docker again with this option, so the uid outside the container is propagated inside:
docker run -u `id -u`
alternatively, as pointed out by @anemyte:
docker run -u $(id -u)
This should cause new files created inside the container to be owned by "jovyan".
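The uid/gid capture above can be sketched like this; "myimage" and the mount paths are placeholders, and the docker command is only echoed here rather than run:

```shell
# Capture the numeric uid/gid of the current host user, so files created
# inside the container end up owned by that user on the host.
uid=$(id -u)
gid=$(id -g)
echo "docker run -u ${uid}:${gid} --mount type=bind,source=$PWD,target=/work myimage"
```

Passing both uid and gid (`-u uid:gid`) also makes group ownership match the host.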
If you know which mount points will exist, you could also pre-create them on the host so the ownership of the files inside is also correct:
docker run -v /path/on/host:/path/in/container ...
If you set the permissions of your local path (/path/on/host) to 777, that will also be propagated to the mount point: no permission error will be thrown regardless of the user Docker uses to create those files.
After that, you'll be free to restore permissions, if needed.
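A minimal sketch of that pre-creation step, using a throwaway path (the real path would be whatever you bind-mount):

```shell
# Pre-create the host-side directory with open permissions before mounting it.
# /tmp/host-data is an example path, not one from the question.
mkdir -p /tmp/host-data
chmod 777 /tmp/host-data
stat -c '%a' /tmp/host-data   # shows 777
```

Afterwards, `chmod` it back to something stricter once the container has written its files.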
I am trying to deploy Jenkins to IBM Cloud Kubernetes Service using a persistent volume. The Jenkins container gets stuck at "Beginning extraction from war file".
I have tried it without persistence, and it deploys as expected.
persistence:
storageClass: ibmc-file-bronze
serviceAccount:
create: false
name: jenkins
annotations: {}
controller:
customInitContainers:
- name: "volume-mount-permission"
image: "busybox"
command: ["/bin/sh"]
args:
- -c
- >-
chgrp -R 1000 /var/jenkins_home &&
chown -R 0 /var/jenkins_home &&
chmod -R g+rwx /var/jenkins_home
volumeMounts:
- name: "jenkins-home"
mountPath: "/var/jenkins_home"
securityContext:
runAsUser: 0
serviceType: NodePort
This is my values.yaml file. I configured a custom init container to fix folder permissions. Without it, the init container fails with a permission issue; with the volume-mount-permission init container, all other containers terminate successfully.
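The init container's permission commands can be rehearsed on a scratch directory (the real container runs `chown -R 0` as root against /var/jenkins_home; that step is omitted here, and the group id is the current user's rather than the hard-coded 1000):

```shell
# Rehearse the init container's permission fix on a throwaway directory.
JH=/tmp/jenkins_home_demo
mkdir -p "$JH"
chgrp -R "$(id -g)" "$JH"   # the real init container uses group 1000
chmod -R g+rwx "$JH"        # group gets read/write/execute, as in the init container
stat -c '%a' "$JH"
```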
The permissions of the jenkins_home folder are below:
jenkins@jenkins-pv-0:/$ ls -al /var/jenkins_home/
total 44
drwxrwxr-x 6 nobody jenkins 4096 Nov 26 15:02 .
drwxr-xr-x 1 root root 4096 Nov 26 15:01 ..
drwxr-xr-x 3 jenkins jenkins 4096 Nov 26 14:50 .cache
drwxrwsrwx 2 root jenkins 4096 Nov 26 14:50 casc_configs
-rw-r--r-- 1 jenkins jenkins 3106 Nov 26 15:01 copy_reference_file.log
-rw-r--r-- 1 jenkins jenkins 8 Nov 26 14:50 jenkins.install.InstallUtil.lastExecVersion
-rw-r--r-- 1 jenkins jenkins 8 Nov 26 14:50 jenkins.install.UpgradeWizard.state
drwxr-xr-x 2 jenkins jenkins 16384 Nov 26 14:51 plugins
-rw-r--r-- 1 jenkins jenkins 78 Nov 26 14:50 plugins.txt
drwxr-xr-x 6 jenkins jenkins 4096 Nov 26 15:02 war
The logs of the Jenkins container are below:
2020-11-26T15:01:49.195430822Z Running from: /usr/share/jenkins/jenkins.war
2020-11-26T15:01:49.199519383Z webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
2020-11-26T15:01:49.404752124Z 2020-11-26 15:01:49.388+0000 [id=1] INFO org.eclipse.jetty.util.log.Log#initialized: Logging initialized #522ms to org.eclipse.jetty.util.log.JavaUtilLog
2020-11-26T15:01:49.585199893Z 2020-11-26 15:01:49.584+0000 [id=1] INFO winstone.Logger#logInternal: Beginning extraction from war file
I followed the official Jenkins Kubernetes installation guide.
The solution was installing the IBM Cloud Block Storage plug-in.
On IBM Cloud Kubernetes Service, it seems, Jenkins cannot be installed on file storage.
Even though I have successfully (?) removed all Docker images and containers, the folder /var/lib/docker/overlay2 is still huge (152 GB). Why? How do I reduce the used disk space?
I tried renaming the folder (in preparation for possibly removing it), but that caused subsequent image pulls to fail.
It seems unbelievable to me that Docker would need this vast amount of disk space just to be able to pull an image again later. Please enlighten me as to what is wrong or why it has to be this way.
List of commands run which should show what I have tried and the current status:
$ docker image prune --force
Total reclaimed space: 0B
$ docker system prune --force
Total reclaimed space: 0B
$ docker image prune -a --force
Total reclaimed space: 0B
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ du -h --max-depth=1 /var/lib/docker/overlay2 | sort -rh | head -25
152G /var/lib/docker/overlay2
1.7G /var/lib/docker/overlay2/ys1nmeu2aewhduj0dfykrnw8m
1.7G /var/lib/docker/overlay2/ydqchhcaqokdokxzbh6htqa49
1.7G /var/lib/docker/overlay2/xmffou5nk3zkrldlfllopxcab
1.7G /var/lib/docker/overlay2/tjz58rjkote2c79veonb3s6qa
1.7G /var/lib/docker/overlay2/rlnr04hlcudgoh6ujobtsu2ck
1.7G /var/lib/docker/overlay2/r4ubwsmrorpr08k8o5rko9n98
1.7G /var/lib/docker/overlay2/q8x21c9enjhpitt365smkmn4e
1.7G /var/lib/docker/overlay2/ntr973uef37oweqlxr4kmaxps
1.7G /var/lib/docker/overlay2/mcyasqzo2gry5dvjxoao1opws
1.7G /var/lib/docker/overlay2/m2k4u58mob6e2db86qqu1e1f8
1.7G /var/lib/docker/overlay2/lizesless03kch8j7kpk89rcf
1.7G /var/lib/docker/overlay2/kmu7mjvsopr8o63onbsijb98j
1.7G /var/lib/docker/overlay2/khgjwqry5drdy0jbwf47gr2lb
1.7G /var/lib/docker/overlay2/gt70ur50vw3whq265vmpep7ay
1.7G /var/lib/docker/overlay2/c3tm1fcuekmdreowrfcso7nd4
1.7G /var/lib/docker/overlay2/7j93t64mt63arj6sewyyejwyo
1.7G /var/lib/docker/overlay2/3ftxvvg2xg02xuwcb3ut3dq89
1.7G /var/lib/docker/overlay2/0m3o3lw6b1ggs8m6z4uv6ueqf
1.4G /var/lib/docker/overlay2/r82rfxme096cq5pg1xz1z5arg
1.4G /var/lib/docker/overlay2/qric73hv1z3nx4k0zop3fvcm6
1.4G /var/lib/docker/overlay2/oyb0a5ab5h642y30s6hawj4r9
1.4G /var/lib/docker/overlay2/oqf9ltfoy36evnkuo8ga2uepl
1.4G /var/lib/docker/overlay2/ntuwvljxxzqs2oxhgg3enyo7x
1.4G /var/lib/docker/overlay2/l0oi2lxdrtg42hk2rznknqk0r
$ ls -l /var/lib/docker/overlay2
total 136
drwx------ 4 root root 72 Nov 20 13:03 00ep8i7v5bdmhqsxdoikslr19
drwx------ 4 root root 72 Feb 28 09:47 026x5e2xns6ui2acym19qfvl7
drwx------ 4 root root 72 Apr 2 19:20 032y8d31damevtfymq6yzkyi4
drwx------ 4 root root 72 Apr 23 13:42 03wwbyd4uge9u0auk94wwdlig
drwx------ 4 root root 72 Jan 15 12:46 04cy91a19owwqu9hyw6vruhzo
drwx------ 4 root root 72 Apr 2 14:44 051625a0f856b63ed67a3bc9c19f09fb1c90303b9536791dc88717cb7379ceeb
drwx------ 4 root root 72 Dec 3 19:56 059fk19uw70p6fqzei6wnj8s2
drwx------ 4 root root 72 Apr 21 15:03 059mddrhqegqhxv1ockejw9gs
drwx------ 4 root root 72 Nov 28 11:26 069dwkz92m8fao6whxnj4x9vp
drwx------ 4 root root 72 Feb 28 09:47 06h7qo5f70oyzaqgn1elbx5u8
drwx------ 4 root root 72 Dec 18 13:27 0756fd640036fa92499cfdcf4bcc3081d9ec16c25eebe5964d5e12d22beb9991
drwx------ 4 root root 72 Apr 20 11:32 09rk4gm6x2mcquc5cz0yvbawq
drwx------ 4 root root 72 Apr 2 19:55 09scfio3qvtewzgc5bdwgw4f6
drwx------ 4 root root 72 May 4 14:00 0ac2a09aa4a038981d37730e56dece4a3d28e80f261b68587c072b4012dc044a
drwx------ 4 root root 72 Feb 25 14:19 0c399f5c349ec61ac175525c52533b069a52028354c1055894466e9b5430fbc3
drwx------ 4 root root 72 May 4 14:00 0cac39b1382986a2d9a6690985792c04d03192337ea37ee06cb74f2f457b7bb7
drwx------ 4 root root 72 Mar 5 08:41 0czco1xx3148slgwf8imdrk33
drwx------ 4 root root 72 Apr 21 08:30 0gb2iqev9e7kr587l09u19eff
drwx------ 4 root root 72 Feb 20 18:03 0gknqh4pyg46uzi6asskbf8xk
drwx------ 4 root root 72 Jan 8 11:43 0gugiou3wqu53os4dageh77ty
drwx------ 4 root root 72 Jan 7 11:31 0i8fd5jet6ieajyl2uo1xj2ai
.
.
.
$ docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:27:04 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:25:42 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
You might have switched storage drivers somewhere along the way, so maybe Docker is just cleaning out those drivers but leaving overlay2 as is (though I still can't understand why pulling images would fail).
Let's try this, run docker info and check what is your storage driver:
$ docker info
Containers: 0
Images: 0
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
<output truncated>
If it is not overlay2 (as it appears above), try switching to it, then prune Docker images again and check whether that cleans up the folder.
Another possible solution is mentioned in this thread: people comment that clearing logs solves the problem, so try the following:
Remove all log files:
find /var/lib/docker/containers/ -type f -name "*.log" -delete
Restart docker daemon (or entire machine):
sudo systemctl restart docker
or
docker-compose down && docker-compose up -d
or
shutdown -r now
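The log-removal step above can be rehearsed safely on a scratch directory first; the paths below are throwaway stand-ins for /var/lib/docker/containers, not the real thing:

```shell
# Build a fake container log tree and run the same find command against it.
mkdir -p /tmp/containers-demo/abc123
echo 'log line' > /tmp/containers-demo/abc123/abc123-json.log
find /tmp/containers-demo/ -type f -name "*.log" -delete
ls /tmp/containers-demo/abc123   # the .log file is gone; the directory remains
```

Only once you are comfortable with what it deletes, point it at the real /var/lib/docker/containers/ path.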
in preparation for a possible removal of the folder
If you are going to delete all data from the Docker directory anyway it is safe to:
Stop Docker Daemon
Remove the /var/lib/docker directory entirely
Restart Docker Daemon
Docker will then recreate any needed data directories.
You can also add:
"log-driver": "json-file",
"log-opts": {"max-size": "20m", "max-file": "3"},
to your /etc/docker/daemon.json to restrict log size and growth in the future or set the log-driver to "journald" to eliminate log files entirely.
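Putting those two keys together, a complete daemon.json might look like this; it is written to a temp path here so nothing on the system is touched (copy it to /etc/docker/daemon.json and restart the daemon to apply, after merging with any existing settings):

```shell
# Write the log-rotation config shown above as a complete daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {"max-size": "20m", "max-file": "3"}
}
EOF
python3 -m json.tool /tmp/daemon.json   # sanity-check that the JSON parses
```

Note that the new log options only apply to containers created after the daemon restart.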
Thanks for your input and suggestions!
I believe that I am still using overlay2 as storage driver:
$ docker info
Client:
Debug Mode: false
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 19.03.8
Storage Driver: overlay2
<output truncated>
I also cleared the logs and restarted the daemon and actually also the entire machine. The problem however remained.
In the end I solved it by stopping the daemon, removing the entire Docker folder, and restarting the daemon, as suggested above.
df -h
sudo systemctl stop docker
sudo mv /var/lib/docker /var/lib/docker_old
sudo systemctl start docker
sudo rm -rf /var/lib/docker_old
df -h
I fear, however, that this is not a permanent solution and the problem will come back, but hopefully this will last another year. :)
Try to prune everything including volumes (different than the original poster's command):
$ docker system prune --volumes
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all volumes not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N]
That freed up a bunch of space for me and solved my issue. I think the build cache was one of the culprits for me.
Two things will fill up /var/lib/docker/overlay2:
Docker images: Clean those with docker image prune -a. Note that any images not currently associated with a container will be deleted, so you will need to pull or build an image again if you need it later.
Container Specific Changes: any write to the container filesystem that isn't going to another mount (like a volume) will cause a copy-on-write that is stored in the container specific layer. You can see these changes with docker diff on a container. Even a metadata change like file ownership, permissions, or a timestamp, can trigger this copy-on-write of the entire file.
Things that are not included in this directory:
Volumes: Named volumes will be stored in /var/lib/docker/volumes by default. You can still prune these with docker volume prune but make sure you have backed up any important data first. A better cleanup is to remove unused anonymous volumes with a command like:
docker volume ls -qf dangling=true | egrep '^[a-z0-9]{64}$' | \
xargs --no-run-if-empty docker volume rm
Container Logs: Container logs will be written to /var/lib/docker/containers. If these are taking up space, it's best to have docker automatically rotate those. See this answer for details on rotating logs.
I had the same problem, /var/lib/docker/overlay2 was using 17 GB even after removing every docker image: docker image rm ...
When you stop a container, it is not automatically removed unless you started it with the --rm flag. Container sizes can be seen with: docker container ls -a -s
I managed to reclaim the space taken by stopped containers using this command: docker container prune.
@Nick's answer is perhaps even better, as it cleans up every kind of unused Docker file:
docker system prune --volumes
I have a Tomcat 8 / MySQL application I want to run in a docker container. I run Ubuntu 16.04 today in test and production and wanted use the Ubuntu 16.04 "latest" as the base FROM to my docker file and add Tomcat 8 and MySQL from there.
I know I can get a Tomcat 8 docker file as my base from https://hub.docker.com/_/tomcat/ but I did not see an Ubuntu base OS for those and I wanted to stay consistent with Ubuntu. Also, it seemed odd to add MySQL to a Tomcat container.
I worked through this issue and am posting my findings in case it helps others with similar issues.
Short answer: Running multiple services (tomcat / mysql) in a single container is not recommended. Yes, there is supervisor.d, etc. But this is discouraged. There is also baseimage-docker if you are committed to multiple services in one container.
The remainder of this answer shows how I got it working it if you really are determined...
The Tomcat 8 distro version on Ubuntu 16.04 is unfortunately only configured to run as a service (described in detail below). Issues with running a service in a docker container are documented well in many posts across stack exchange (it is discouraged). I was able to get tomcat 8 working as a service by adding a "tail -f /var/log/tomcat8/catalina.out" to the end of the "service tomcat8 start" command and starting the container with the "--cap-add SYS_PTRACE" option.
CMD service tomcat8 start && tail -f /var/log/tomcat8/catalina.out
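A minimal Dockerfile sketch of that service-based approach (the base image choice and tomcat8 package availability are assumptions, and this is untested here):

```dockerfile
FROM ubuntu:16.04
# Install the distro tomcat8 package (runs as a service, as described above)
RUN apt-get update && apt-get install -y tomcat8 && rm -rf /var/lib/apt/lists/*
EXPOSE 8080
# Start the service, then tail the log so the container has a foreground process
CMD service tomcat8 start && tail -f /var/log/tomcat8/catalina.out
```

Remember to start the container with --cap-add SYS_PTRACE, as noted above, for the service command to work.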
The recommended way to start tomcat8 is to use the commands in /usr/share/tomcat8/bin. However, the distro version's soft links are incorrect and the server fails to start.
Using the commands ./catalina.sh run or ./startup.sh both produce an error such as this:
SEVERE: Cannot find specified temporary folder at /usr/share/tomcat8/temp
WARNING: Unable to load server configuration from [/usr/share/tomcat8/conf/server.xml]
SEVERE: Cannot start server. Server instance is not configured.
The distro splits tomcat8 across /usr/share/tomcat8 and /var/lib/tomcat8 which separates the bin files (catalina.sh and startup.sh) from the config and logs soft links in /var/lib/tomcat8. This makes these commands fail.
Files in /usr/share/tomcat8:
root@85d5fe47b66a:/usr/share/tomcat8# ls -la
total 32
drwxr-xr-x 4 root root 4096 Mar 9 22:18 .
drwxr-xr-x 117 root root 4096 Mar 9 23:29 ..
drwxr-xr-x 2 root root 4096 Mar 9 22:18 bin
-rw-r--r-- 1 root root 39 Mar 31 2017 defaults.md5sum
-rw-r--r-- 1 root root 1929 Apr 10 2017 defaults.template
drwxr-xr-x 2 root root 4096 Mar 9 22:18 lib
-rw-r--r-- 1 root root 53 Mar 31 2017 logrotate.md5sum
-rw-r--r-- 1 root root 118 Apr 10 2017 logrotate.template
Files in /var/lib/tomcat8:
root@85d5fe47b66a:/var/lib/tomcat8# ls -la
total 16
drwxr-xr-x 4 root root 4096 Mar 9 22:18 .
drwxr-xr-x 41 root root 4096 Mar 9 23:29 ..
lrwxrwxrwx 1 root root 12 Sep 28 14:43 conf -> /etc/tomcat8
drwxr-xr-x 2 tomcat8 tomcat8 4096 Sep 28 14:42 lib
lrwxrwxrwx 1 root root 17 Sep 28 14:43 logs -> ../../log/tomcat8
drwxrwxr-x 3 tomcat8 tomcat8 4096 Mar 9 22:18 webapps
lrwxrwxrwx 1 root root 19 Sep 28 14:43 work -> ../../cache/tomcat8
Running ./version.sh reveals that both CATALINA_BASE and CATALINA_HOME are set to /usr/share/tomcat8
Using CATALINA_BASE: /usr/share/tomcat8
Using CATALINA_HOME: /usr/share/tomcat8
Using CATALINA_TMPDIR: /usr/share/tomcat8/temp
Using JRE_HOME: /usr
Using CLASSPATH: /usr/share/tomcat8/bin/bootstrap.jar:/usr/share/tomcat8/bin/tomcat-juli.jar
Server version: Apache Tomcat/8.0.32 (Ubuntu)
Server built: Sep 27 2017 21:23:18 UTC
Server number: 8.0.32.0
OS Name: Linux
OS Version: 4.4.0-116-generic
Architecture: amd64
JVM Version: 1.8.0_161-b12
JVM Vendor: Oracle Corporation
Setting CATALINA_BASE explicitly to /var/lib/tomcat8 inside catalina.sh solved the problem in using ./catalina.sh run to start tomcat. In the past, I have alternatively added the soft links to conf, logs and work under the /usr/share/tomcat8 directory so it could find those files and start up properly with the catalina.sh run command.
BTW, even though the JRE_HOME is clearly wrong in the version.sh dump above, the service does start correctly (when I append the tail -f command as described earlier). It also starts via catalina.sh run when I manually add the correct CATALINA_BASE variable to catalina.sh, so I spent no time looking into why it prints incorrectly.
In the end, I realized three things:
Running multiple services (tomcat / mysql) in a single container is not recommended. Yes, there is supervisor.d, etc. But this is discouraged. There is also baseimage-docker if you are committed to multiple services in one container.
Even running a single service in a container is not recommended but there are documented ways to make it work (which I did for tomcat8 by adding the && tail -f ... to the end of the CMD).
In Ubuntu 16.04 (did not test other distros), to make tomcat8 run as a command (not a service) you need to either:
a) grab the tar file for Tomcat 8 and install that, since it puts all of the files under one directory and therefore avoids the soft-link issue; or b) if you insist on using the distro tomcat8 from apt-get, either b.1) modify catalina.sh to set CATALINA_BASE and copy it to the proper installation directory, or b.2) add the soft links.
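Option (b.2) can be rehearsed on a scratch tree; the paths below are illustrative stand-ins for /usr/share/tomcat8 and the distro's link targets, not a live system:

```shell
# Recreate the conf/logs/work soft links that the distro keeps under
# /var/lib/tomcat8, but under the CATALINA_HOME directory instead.
CH=/tmp/usr-share-tomcat8
mkdir -p "$CH" /tmp/etc-tomcat8 /tmp/log-tomcat8 /tmp/cache-tomcat8
ln -sfn /tmp/etc-tomcat8   "$CH/conf"
ln -sfn /tmp/log-tomcat8   "$CH/logs"
ln -sfn /tmp/cache-tomcat8 "$CH/work"
ls -l "$CH"
```

On a real system the link targets would be /etc/tomcat8, /var/log/tomcat8, and /var/cache/tomcat8, matching the listing shown earlier.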
While running docker commands, I keep getting this error:
$ sudo docker search mattdm/fedora
2014/06/05 22:12:25 Error: Get https://index.docker.io/v1/search?q=mattdm%2Ffedora: x509: certificate signed by unknown authority
I'm using Fedora 20 x86_64 without any http proxy.
I searched with Google but failed to find any clue, and I have no idea how to troubleshoot this error. Could anyone give me some pointers on fixing it?
Here is some additional info that may help:
$ sudo docker version
Client version: 0.11.1
Client API version: 1.11
Go version (client): go1.2.1
Git commit (client): fb99f99/0.11.1
Server version: 0.11.1
Server API version: 1.11
Git commit (server): fb99f99/0.11.1
Go version (server): go1.2.1
$ curl https://index.docker.io/v1/search?q=mattdm/fedora
{"query": "mattdm/fedora", "num_results": 2, "results": [{"is_trusted": false, "is_official": false, "name": "mattdm/fedora", "star_count": 49, "description": "A basic Fedora image corresponding roughly to a minimal install, minus some things which don't make sense in a container. Use tag `f20` for Fedora 20 or `f19` for Fedora 19."}, {"is_trusted": false, "is_official": false, "name": "mattdm/fedora-small", "star_count": 8, "description": "A small Fedora image on which to build. Contains just enough that you'll be able to run `yum install` in your dockerfiles to create something useful. Use tag `f19` for Fedora 19."}]}
$ ls -l /etc/pki/tls/certs/
total 1500
lrwxrwxrwx. 1 root root 49 Feb 18 03:58 ca-bundle.crt -> /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
-rw-r--r--. 1 root root 713687 Jan 5 2013 ca-bundle.crt.rpmsave
lrwxrwxrwx. 1 root root 55 Feb 18 03:58 ca-bundle.trust.crt -> /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt
-rw-r--r--. 1 root root 796502 Jan 5 2013 ca-bundle.trust.crt.rpmsave
-rw-r--r--. 1 root root 1338 Mar 14 12:13 ca-certificates.crt
-rw-------. 1 root root 1025 Sep 25 2012 localhost.crt
-rwxr-xr-x. 1 root root 610 Apr 8 08:36 make-dummy-cert
-rw-r--r--. 1 root root 2242 Apr 8 08:36 Makefile
-rwxr-xr-x. 1 root root 829 Apr 8 08:36 renew-dummy-cert
This turned out to be related to the CDN provider.
check here: https://github.com/dotcloud/docker/issues/6474
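For context, the x509 error means the certificate chain presented by the server does not terminate at a CA in the local trust store. The same failure can be reproduced with any self-signed certificate (throwaway paths below):

```shell
# Generate a throwaway self-signed cert, a stand-in for whatever the CDN served.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 1 -subj "/CN=demo.example" 2>/dev/null
# Verifying it against the system CA bundle fails, just like docker's error:
openssl verify /tmp/demo.crt || echo "verification failed, as expected"
```

If a corporate proxy or CDN is rewriting TLS, its CA has to be added to the host trust store for docker to accept it.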