Docker - How to dockerize an existing Tomcat-based application with a given directory structure

I am new to Docker, and I am trying to dockerize an existing Tomcat-based application with the directory structure below:
ProductHomeDir
|
|----Tomcat
|      |
|      |---webapp
|             |
|             |---feature.war
|             |---support.war
|
|----versionDir
|
|----ConfDir
|
|----log4j.properties
Is it possible to create a Docker image for the above structure?
Thanks

You can copy the whole ProductHomeDir into the Docker image using the COPY instruction in a Dockerfile (https://docs.docker.com/engine/reference/builder/#copy),
then use CMD in the Dockerfile to execute scripts inside the container.
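For example, a minimal Dockerfile could look like this (a sketch, assuming the official tomcat base image and that the wars under Tomcat/webapp are your deployables; the /opt/product paths are placeholders you would adjust to wherever your application expects its config):

# base image with Tomcat already installed and configured
FROM tomcat:9
# deploy the wars into the image's webapps directory
COPY Tomcat/webapp/feature.war /usr/local/tomcat/webapps/
COPY Tomcat/webapp/support.war /usr/local/tomcat/webapps/
# carry the rest of the product home along (target paths are assumptions)
COPY ConfDir/ /opt/product/ConfDir/
COPY versionDir/ /opt/product/versionDir/
COPY log4j.properties /opt/product/log4j.properties
# the tomcat image starts catalina.sh run by default;
# override CMD only if you need a custom startup script
CMD ["catalina.sh", "run"]

Then build and run from inside ProductHomeDir with docker build -t product . and docker run -p 8080:8080 product.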

Related

How can I connect (ssh) to docker container from other computer inside my lab network?

I'm trying to use the resources of other computers with python3-mpi4py, since my research involves a lot of computation.
My code and data are in the Docker container.
To use MPI, I have to be able to ssh directly into the Docker container from other computers on the same network as the host computer. But I cannot ssh into it.
My setup looks like this:
+------------------+                              +-----------------+
|       Host       |<---- on the same network --->| Other Computers |
|    port 10000    |                              |                 |
|         ^        |                              |                 |
|         |        |                              |                 |
|         v        |                              |                 |
|    port 10000    |                              |                 |
| docker container |<------------ ssh ------------|                 |
+------------------+                              +-----------------+
Can anyone teach me how to do this?
You can run an ssh server on the Host computer; then you can ssh to the Host and use a docker command such as docker exec -i -t containerName /bin/bash to get an interactive shell.
example:
# 1. On Other Computers
ssh root@host_ip
>> you are now in a shell on the Host
# 2. On the Host
docker exec -i -t containerName /bin/bash
>> you are now in an interactive shell inside the container
# 3. Inside the container
do whatever you need
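If you really need to ssh straight into the container as in your diagram (instead of hopping through the Host), a common approach is to run an sshd inside the container and publish it on a host port. A rough sketch, assuming your image has openssh-server installed and configured (the image and container names are placeholders):

# on the Host: map host port 10000 to the container's sshd on port 22
docker run -d -p 10000:22 --name mpi-node my-mpi-image /usr/sbin/sshd -D
# on Other Computers: ssh to the container through the Host's IP
ssh -p 10000 root@host_ip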

Use multiple dockers with same name

I am working on building a high-availability setup using keepalived, where each server will have its own set of dockers that will get handled appropriately depending on whether it is in BACKUP or MASTER. However, for testing purposes, I don't have 2 boxes available that I can turn on and off for this. So, is there a good (preferably lightweight) way I can set up multiple dockers with the same name on the same machine?
Essentially, I would like it to look something like this:
Physical Server A
+------------------------------------------+
| Virtual Server A                         |
| +--------------------------------------+ |
| | keepalived - htmld - accessd - mysql | |
| +--------------------------------------+ |
|                    ^                     |
|                    |                     |
|                    v                     |
| Virtual Server B                         |
| +--------------------------------------+ |
| | keepalived - htmld - accessd - mysql | |
| +--------------------------------------+ |
+------------------------------------------+
Thanks
You cannot have multiple containers with the exact same name, but you can use docker-compose files in separate directories to get several containers with the same service names (with some differences, which I explain below).
You can read more about this in the Docker Docs; they cover the explanation below.
Let us suppose your setup:
Physical Server A
+------------------------------------------+
| Virtual Server A                         |
| +--------------------------------------+ |
| | keepalived - htmld - accessd - mysql | |
| +--------------------------------------+ |
|                    ^                     |
|                    |                     |
|                    v                     |
| Virtual Server B                         |
| +--------------------------------------+ |
| | keepalived - htmld - accessd - mysql | |
| +--------------------------------------+ |
+------------------------------------------+
In your case, I would create two directories: vsa and vsb. Now let's go into these two directories.
Each contains these files (at least; you can have more per your requirements):
/home/vsa/docker-compose.yml
/home/vsa/keepalived/Dockerfile
/home/vsa/htmld/Dockerfile
/home/vsa/accessd/Dockerfile
/home/vsa/mysql/Dockerfile

/home/vsb/docker-compose.yml
/home/vsb/keepalived/Dockerfile
/home/vsb/htmld/Dockerfile
/home/vsb/accessd/Dockerfile
/home/vsb/mysql/Dockerfile
Note the file names exactly, as Dockerfile starts with capital D.
Let's look at docker-compose.yml:
version: '3.9'

services:
  keepalived:
    build: ./keepalived
    restart: always
  htmld:
    build: ./htmld
    restart: always
  accessd:
    build: ./accessd
    restart: always
  mysql:
    build: ./mysql
    restart: always

networks:
  default:
    external:
      name: some_network

volumes:
  some_name: {}
Let's dig into docker-compose.yml first:
The version key defines which compose file format version to use. The services section defines the services and containers you want to create and run.
I've used names like keepalived under services; you can use any name you want there, as it's your choice.
Under keepalived, the build key specifies the path where its Dockerfile lives. Since docker-compose.yml sits in /home/vsa, the leading . means "here", and ./keepalived points to the keepalived directory, where a file named Dockerfile is expected (in the docker-compose.yml for vsb, it searches for this file in /home/vsb/keepalived).
The networks section specifies the external network these containers use, so that when all the containers from a docker-compose file are running they sit on the same Docker network and can see and talk to each other. The name field holds the network name some_network; you can choose any name you want for a network that was created beforehand.
To create a network called some_network on Linux, run docker network create some_network before running docker-compose.
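For example, on the host (both are stock docker CLI commands):

docker network create some_network
docker network ls   # confirm some_network is listed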
The volumes section declares the named volumes for these services.
And here is an example of a file called Dockerfile in the keepalived directory:
# see the Dockerfile reference for more info:
# https://docs.docker.com/engine/reference/builder/
FROM ubuntu:latest
# after the FROM instruction, you can use
# other available instructions to reach
# your own goal
Now for the Dockerfile:
The FROM instruction specifies which base image to use. In this case we use ubuntu, for example, so our image is built on top of Ubuntu.
There are other instructions; you can see them all at the link above.
After finishing both the Dockerfile and docker-compose.yml files with your own instructions and keywords, you can build and run everything with these commands:
docker-compose -f /home/vsa/docker-compose.yml up -d
docker-compose -f /home/vsb/docker-compose.yml up -d
Now we'll have eight containers with names like these (docker-compose names them automatically; otherwise you can name them explicitly yourself):
vsa_keepalived
vsa_htmld
vsa_accessd
vsa_mysql
vsb_keepalived
vsb_htmld
vsb_accessd
vsb_mysql
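You can verify the resulting names with the stock docker CLI:

docker ps --format '{{.Names}}'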

Reliable way of getting the full container name inside a container running in Docker Swarm

Background: I have a setup where many different scalable services connect to their databases via a connection pool (per instance). These services run within a Docker Swarm.
In my current database setup, this ends up looking as follows (using PostgreSQL in this example):
PID | Database | User | Application                     | Client | ...
... | db1      | app  | --standard JDBC driver string-- | x      | ...
... | db1      | app  | --standard JDBC driver string-- | y      | ...
... | db1      | app  | --standard JDBC driver string-- | y      | ...
... | ...      | app  | ...                             | x      | ...
... | ...      | app  | ...                             | x      | ...
... | db2      | app  | --standard JDBC driver string-- | y      | ...
... | db2      | app  | --standard JDBC driver string-- | y      | ...
... | ...      | app  | ...                             | x      | ...
... | ...      | app  | ...                             | x      | ...
What I would like to do, effectively, is provide the current Docker Swarm container name, including scaling identifier, to the DBMS to be able to better monitor the database connections, i.e.:
PID | Database | User | Application        | Client | ...
... | db1      | app  | books-service.1    | x      | ...
... | db1      | app  | books-service.2    | y      | ...
... | db1      | app  | books-service.3    | y      | ...
... | ...      | app  | ...                | x      | ...
... | ...      | app  | ...                | x      | ...
... | db2      | app  | checkout-service.2 | y      | ...
... | db2      | app  | checkout-service.2 | y      | ...
... | ...      | app  | ...                | x      | ...
... | ...      | app  | ...                | x      | ...
(obviously, setting the connection string is trivial - it's getting the information to set that is the issue)
Since my applications are managed by Docker Swarm (and sometime in the future, likely Kubernetes), I cannot manually set this value via environment (as I do not perform a docker run).
When running docker ps on a given Swarm node, I see the following output:
CONTAINER ID   IMAGE                                            COMMAND                  CREATED         STATUS         PORTS   NAMES
59cdbf724091   my-docker-registry.net/books-service:latest      "/bin/sh -c '/usr/bi…"   7 minutes ago   Up 7 minutes           books-service.1.zabvo1jal0h2xya9qfftnrnej
0eeeee15e92a   my-docker-registry.net/checkout-service:latest   "/bin/sh -c 'exec /u…"   8 minutes ago   Up 8 minutes           checkout-service.2.189s7d09m0q86y7fdf3wpy0vc
Of note is the NAMES column, which includes an identifier for the actual instance of the given container (or image, however you'd prefer to look at it). Do note that this name is not the hostname of the container, which by default is the container ID.
I know there are ways to determine if an application is running inside Docker (e.g. using /proc/1/cgroup), but that doesn't help me either as those also only list the container ID.
Is there a way to get this value from inside a Docker container that is being run in a swarm?
What you need (books-service.1) is a combination of a swarm service name and a task slot. Both of these can be passed to the container as environment variables, as well as a full task name (books-service.1.zabvo1jal0h2xya9qfftnrnej):
version: "3.0"
services:
test:
image: debian:buster
command: cat /dev/stdout
environment:
SERVICE_NAME: '{{ .Service.Name }}'
TASK_SLOT: '{{ .Task.Slot }}'
TASK_NAME: "{{ .Task.Name }}"
After passing these you can either strip the TASK_NAME to get what you need or combine the SERVICE_NAME with the TASK_SLOT. Come to think of it, you can combine them right in the template:
version: "3.0"
services:
test:
image: debian:buster
command: cat /dev/stdout
environment:
MY_NAME: '{{ .Service.Name }}.{{.Task.Slot}}'
Other possible placeholders can be found here.
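For example, on the application side you could then read MY_NAME and hand it to PostgreSQL through the JDBC driver's ApplicationName connection parameter (a sketch; host, port, and database are placeholders):

jdbc:postgresql://db-host:5432/app?ApplicationName=books-service.1

or build it from the environment at startup (the --db-url option is a hypothetical application flag):

java -jar service.jar --db-url="jdbc:postgresql://db-host:5432/app?ApplicationName=${MY_NAME}"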

How to know a process is running under docker?

I may be asking a very beginner-level question, but I need a way to distinguish processes running under Docker from non-Docker processes on a box. The ps command output gives me the feeling the process is running directly on the Linux box, and I cannot confirm whether it is actually under the hood of Docker.
In the same context, is it possible/feasible for a process under Docker to be started with Docker's root filesystem?
Is that feasible, or is there another solution for this?
You can identify Docker processes by looking at the process tree on the Docker host (or on the VM, if using Docker for Mac/Windows).
For example: the parent of 2924 (haproxy) is 2902; the parent of 2902 (haproxy-start) is 2881; and 2881 is a docker-containerd process, which is itself managed by the dockerd process.
To view your process listing in tree format, use ps -ejH or pstree (available in the psmisc package).
To get a quick list of what's running under dockerd:
/ # pstree $(pgrep dockerd)
dockerd-+-docker-containe-+-docker-containe-+-java---17*[{java}]
        |                 |                 `-8*[{docker-containe}]
        |                 |-docker-containe-+-sinopia-+-4*[{V8 WorkerThread}]
        |                 |                 |         |-{node}
        |                 |                 |         `-4*[{sinopia}]
        |                 |                 `-8*[{docker-containe}]
        |                 |-docker-containe-+-node-+-4*[{V8 WorkerThread}]
        |                 |                 |      `-{node}
        |                 |                 `-8*[{docker-containe}]
        |                 |-docker-containe-+-tinydns
        |                 |                 `-8*[{docker-containe}]
        |                 |-docker-containe-+-dnscache
        |                 |                 `-8*[{docker-containe}]
        |                 |-docker-containe-+-apt-cacher-ng
        |                 |                 `-8*[{docker-containe}]
        |                 `-20*[{docker-containe}]
        |-2*[docker-proxy---6*[{docker-proxy}]]
        |-docker-proxy---5*[{docker-proxy}]
        |-2*[docker-proxy---4*[{docker-proxy}]]
        |-docker-proxy---8*[{docker-proxy}]
        `-28*[{dockerd}]
Show the parents of a PID (-s)
/ # pstree -aps 3744
init,1
  `-dockerd,1721 --pidfile=/run/docker.pid -H unix:///var/run/docker.sock --swarm-default-advertise-addr=eth0
      `-docker-containe,1728 -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim ...
          `-docker-containe,3711 8d923b3235eb963b735fda847b745d5629904ccef1245d4592cc986b3b9b384a...
              `-java,3744 -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /zookeeper/bin/../build/cl
                  |-{java},4174
                  |-{java},4175
                  |-{java},4176
                  |-{java},4177
                  |-{java},4190
                  |-{java},4208
                  |-{java},4209
                  |-{java},4327
                  |-{java},4328
                  |-{java},4329
                  |-{java},4330
                  |-{java},4390
                  |-{java},4416
                  |-{java},4617
                  |-{java},4625
                  |-{java},4629
                  `-{java},4632
Show all children of docker, including namespace changes (-S):
/ # pstree -apS $(pgrep dockerd)
dockerd,1721 --pidfile=/run/docker.pid -H unix:///var/run/docker.sock --swarm-default-advertise-addr=eth0
  |-docker-containe,1728 -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim ...
  |   |-docker-containe,3711 8d923b3235eb963b735fda847b745d5629904ccef1245d4592cc986b3b9b384a...
  |   |   |-java,3744,ipc,mnt,net,pid,uts -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /zookeeper/bin/../build/cl
  |   |   |   |-{java},4174
  |   |   |   |-{java},4175
  |   |   |   |-{java},4629
  |   |   |   `-{java},4632
  |   |   |-{docker-containe},3712
  |   |   `-{docker-containe},4152
  |   |-docker-containe,3806 49125f8274242a5ae244ffbca121f354c620355186875617d43876bcde619732...
  |   |   |-sinopia,3841,ipc,mnt,net,pid,uts
  |   |   |   |-{V8 WorkerThread},4063
  |   |   |   |-{V8 WorkerThread},4064
  |   |   |   |-{V8 WorkerThread},4065
  |   |   |   |-{V8 WorkerThread},4066
  |   |   |   |-{node},4062
  |   |   |   |-{sinopia},4333
  |   |   |   |-{sinopia},4334
  |   |   |   |-{sinopia},4335
  |   |   |   `-{sinopia},4336
  |   |   |-{docker-containe},3814
  |   |   `-{docker-containe},4038
  |   |-docker-containe,3846 2a756d94c52d934ba729927b0354014f11da6319eff4d35880a30e72e033c05d...
  |   |   |-node,3910,ipc,mnt,net,pid,uts lib/dnsd.js
  |   |   |   |-{V8 WorkerThread},4204
  |   |   |   |-{V8 WorkerThread},4205
  |   |   |   |-{V8 WorkerThread},4206
  |   |   |   |-{V8 WorkerThread},4207
  |   |   |   `-{node},4203
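Another quick way to map containers to host PIDs is the docker CLI itself (a sketch; run on the Docker host, where the docker socket is available):

# print each container's name and the host PID of its main process
docker ps -q | xargs docker inspect --format '{{.Name}} {{.State.Pid}}'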
The lxc-ls and lxc-ps commands may be installable on your Linux distribution. They let you list the running LXC containers and the processes running within those containers, respectively. You should be able to join the output of lxc-ls with lxc-ps to get a list of all containerized processes.
The big caveat is that you asked about Docker, and not every Docker instance runs on LXC, nor is it necessarily a localhost process. Docker defines an API that can be called to list remote Docker instances, but this technique will not help with enumerating processes on remote machines.
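For example, if a remote daemon has its API exposed over TCP (an assumption; the address is a placeholder), the stock CLI can query it directly:

docker -H tcp://remote-host:2375 ps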
On Windows, Docker behaves a little differently.
Container processes are not run as children of a parent process, but as separate processes on the host.
They can be viewed, for example, with PowerShell:
Get-Process powershell
For example, the host's process list while running a microsoft/iis container will include an additional powershell process (since the microsoft/iis container runs PowerShell as its main executable).

Docker on several computers

For a study, I deployed a cloud architecture on my computer using Docker (Nginx for load balancing and some Apache servers running a simple PHP application).
I wanted to know whether it is possible to use several computers to deploy my containers, in order to increase the power available.
(I'm using a MacBook Pro with Yosemite. I've installed boot2docker with VirtualBox.)
Disclosure: I was a maintainer on Swarm Legacy and Swarm mode
Edit: This answer mentions Docker Swarm Legacy, the first version of Docker Swarm. Since then a new version called Swarm mode was directly included in the docker engine and behaves a bit differently in terms of topology and features even though the big ideas remain.
Yes, you can deploy Docker on multiple machines and manage them all together as a single pool of resources. There are several solutions you can use to orchestrate your containers across multiple machines.
You can use Docker Swarm, Kubernetes, Mesos/Marathon, or Fleet (there might be others, as this is a fast-moving area). There are also commercial solutions like Amazon ECS.
In the case of Swarm, it uses the Docker remote API to communicate with the distant Docker daemons and schedules containers according to load or extra constraints (other systems are similar, with more or fewer features). Here is an example of a small Swarm deployment:
                     Docker CLI
                          +
                          |
                          |
                          | 4000 (or else)
                          | server
                 +--------v---------+
                 |                  |
     +----------->  Swarm Manager   <----------+
     |           |                  |          |
     |           +--------^---------+          |
     |                    |                    |
     |                    |                    |
     |                    |                    |
     |                    |                    |
     | client             | client             | client
     | 2376               | 2376               | 2376
     |                    |                    |
+----v------------+  +----v------------+  +----v------------+
|                 |  |                 |  |                 |
|   Swarm Agent   |  |   Swarm Agent   |  |   Swarm Agent   |
|     Docker      |  |     Docker      |  |     Docker      |
|     Daemon      |  |     Daemon      |  |     Daemon      |
|                 |  |                 |  |                 |
+-----------------+  +-----------------+  +-----------------+
Choosing one of those systems is basically a choice between:
Cluster deployment simplicity and maintenance
Flexibility of the scheduler
Completeness of the API
Support for running VMs
Higher abstraction for groups of containers: Pods
Networking model (Bridge/Host or Overlay or Flat network)
Compatibility with the Docker remote API
It depends mostly on the use case and what kinds of workload you are running. For more details on the differences between those systems, see this answer.
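For a quick hands-on start with today's engine, the built-in Swarm mode mentioned in the edit above needs the least setup (a sketch; the IP address and service name are placeholders):

# on the first machine (becomes a manager)
docker swarm init --advertise-addr 192.168.1.10
# on each additional machine, run the join command printed by init
docker swarm join --token <token> 192.168.1.10:2377
# back on the manager: spread three nginx replicas across the nodes
docker service create --name web --replicas 3 -p 80:80 nginx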
That sounds like clustering, which is what docker swarm does (see its github repo).
It turns a pool of Docker hosts into a single, virtual host.
See for example issue 247: How replication control and load balancing being taken care of?
