Currently I have 2 web applications (WAR) app1 and app2 running on Tomcat 7:
app1 on http://localhost:8080/app1
app2 on http://localhost:8080/app2
I have created two different Docker images, one for app1 and one for app2.
Now I want to run both images in Docker so that I can access the applications at the same host and port as before, i.e. localhost:8080/app1 and localhost:8080/app2.
Is it possible?
Thanks in advance.
You may use docker-compose for this purpose. docker-compose can build and run multiple containers for you on the same host, and you can include other services in the same file as you need, say a database or a Redis service. Then run the docker-compose up command:
$ docker-compose -f myapps-docker-compose.yml up -d
where -f points docker-compose at your Compose file and -d runs the containers detached (in the background).
A sample docker-compose file, called myapps-docker-compose.yml:
version: '2.1'

services:
  redis:
    image: 'redis:5.0.5'
    # command: redis-server --requirepass redispass
  app1:
    image: app1
    build:
      context: .
      dockerfile: Dockerfile_app1
  app2:
    image: app2
    build:
      context: .
      dockerfile: Dockerfile_app2
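One caveat: the file above does not publish any ports, and two containers cannot both bind host port 8080 directly, so serving both apps from a single localhost:8080 requires a reverse proxy in front. A minimal sketch of such a service (the ./nginx.conf file is an assumption you would have to write yourself, proxying /app1 and /app2 to the Tomcat containers on their internal port 8080):

```yaml
services:
  nginx:
    image: nginx:1.17
    ports:
      - "8080:80"
    volumes:
      # assumed local config: proxy_pass /app1 -> http://app1:8080, /app2 -> http://app2:8080
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - app1
      - app2
```

Alternatively, publish each app on its own host port (for example 8081 and 8082) with a ports: entry under app1 and app2.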
I am using docker-compose and my configuration file is simply:
version: '3.7'

volumes:
  mongodb_data: {}

services:
  mongodb:
    image: mongo:4.4.3
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=super-secure-password
  rocket:
    build:
      context: .
    depends_on:
      - mongodb
    image: rocket:dev
    dns:
      - 1.1.1.1
      - 8.8.8.8
    volumes:
      - .:/var/rocket
    ports:
      - "30301-30309:30300"
I start MongoDB with docker-compose up, and then in new terminal windows I run two Node.js applications, each with all the source code in /var/rocket, with:
# 1st Node.js application
docker-compose run --service-ports rocket
# 2nd Node.js application
docker-compose run --service-ports rocket
The problem is that the 2nd Node.js application service needs to communicate with the 1st Node.js application service on port 30300. I was able to get this working by referencing the 1st Node.js application by the id of the Docker container:
Connect to 1st Node.js application service on: tcp://myapp_myapp_run_837785c85abb:30300 from the 2nd Node.js application service.
Obviously this does not work long term as the container id changes every time I docker-compose up and down. Is there a standard way to do networking when you start multiple of the same container from docker-compose?
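For context on the name you saw: Compose derives container names from the project name and service name. Containers created by docker-compose up get a stable index suffix, while docker-compose run appends a one-off suffix, which is why the name changes every time. A quick illustration of the two patterns (the project and service names here are just examples):

```shell
project=myapp
service=rocket

# Containers created by `docker-compose up` get a stable, indexed name:
echo "${project}_${service}_1"

# Containers created by `docker-compose run` get a one-off suffix instead
# (e.g. myapp_rocket_run_837785c85abb), which is different on every run.
```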
You can run the same image multiple times in the same docker-compose.yml file:
version: '3.7'
services:
  mongodb: { ... }
  rocket1:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30301:30300"
  rocket2:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30302:30300"
As described in Networking in Compose, the containers can communicate using their respective service names and their "normal" port numbers, like rocket1:30300; any ports: are ignored for this. You shouldn't need to manually docker-compose run anything.
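If the application needs to be told where its peer is, one option is to pass the peer's service address through the environment. PEER_URL here is a hypothetical variable your app would have to read; it is not something Compose defines:

```yaml
  rocket2:
    build: .
    depends_on:
      - mongodb
    environment:
      # hypothetical app-level setting; "rocket1" resolves via the Compose network's DNS
      - PEER_URL=tcp://rocket1:30300
    ports:
      - "30302:30300"
```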
Well you could always give specific names to your two Node containers:
$ docker-compose run --name rocket1 --service-ports rocket
$ docker-compose run --name rocket2 --service-ports rocket
And then use:
tcp://rocket1:30300
I want to know how to share an application folder between two containers.
I found articles about how to share a folder between a container and the host, but I could not find anything about container-to-container sharing.
I want to edit the frontend application's code from the backend container, so I need to share the folder between them <- this is my goal.
Any solution?
My config is like this
/
├── docker-compose.yml
├── backend application
│   └── Dockerfile
└── frontend application
    └── Dockerfile
And
docker-compose.yml is like this
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - code_share:/var/web/railsApp
    ports:
      - "5000:5000" # rails listens on 5000; the host port must also differ from reactApp's 3000
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - code_share:/var/web/reactApp
    ports:
      - "3000:3000"
volumes:
  code_share:
You are already mounting a named volume into both your frontend and backend containers.
With that configuration, both /var/web/railsApp and /var/web/reactApp see the exact same content: whenever you write to /var/web/reactApp in the frontend container, the change is also reflected under /var/web/railsApp in the backend.
To achieve what you want (having railsApp and reactApp as separate folders under /var/web), try mounting a folder on the host machine into both containers instead (and make sure each application writes into its respective /var/web subfolder):
mkdir -p /var/web/railsApp /var/web/reactApp
then adjust your compose file:
version: '3'
services:
  railsApp: # <- backend application
    build: .
    command: bundle exec rails s -p 5000 -b '0.0.0.0'
    volumes:
      - /var/web:/var/web
    ports:
      - "5000:5000" # rails listens on 5000; the host port must also differ from reactApp's 3000
  reactApp: # <- frontend application
    build: .
    command: yarn start
    volumes:
      - /var/web:/var/web
    ports:
      - "3000:3000"
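A variation on this, if you would rather keep the shared folders inside the project directory, is to bind-mount each application's folder individually. The ./backend and ./frontend paths below are assumptions based on the layout shown in the question:

```yaml
  railsApp:
    volumes:
      - ./backend:/var/web/railsApp
  reactApp:
    volumes:
      - ./frontend:/var/web/reactApp
```

Relative paths in volumes: are resolved against the directory containing docker-compose.yml.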
I'm trying to migrate working Docker config files (Dockerfile and docker-compose.yml) so that the images they build can be deployed to Docker Hub.
Tried multiple config file settings.
I have the following Dockerfile and, below, the docker-compose.yml that uses it. When I run "docker-compose up", I successfully get two containers running that can either be accessed independently or will talk to each other via the "db" and the database "container_name". So far so good.
What I cannot figure out is how to take this configuration (the files below) and modify them so I get the same behavior on docker hub. Being able to have working local containers is necessary for development, but others need to use these containers on docker hub so I need to deploy there.
--
Dockerfile:
FROM tomcat:8.0.20-jre8
COPY ./services.war /usr/local/tomcat/webapps/
--
docker-compose.yml:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8089:8080"
    volumes:
      - /Users/user/Library/apache-tomcat-9.0.7/conf/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
    depends_on:
      - db
  db:
    image: mysql:5.7
    container_name: test-mysql-docker
    ports:
      - 3307:3306
    volumes:
      - ./ZipCodeLookup.sql:/docker-entrypoint-initdb.d/ZipCodeLookup.sql
    environment:
      MYSQL_ROOT_PASSWORD: "thepass"
I expect others to be able to pull these images from Docker Hub and run the same containers, but I cannot see how these files need to be modified to get that. Thanks.
Add an image attribute to the app service:
app:
  build:
    context: .
    dockerfile: Dockerfile
  ports:
    - "8089:8080"
  image: docker-hub-username/app
Replace "docker-hub-username" with your Docker Hub username. Then run docker-compose build app followed by docker-compose push app.
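Putting it together, the relevant part of the Compose file might look like this (docker-hub-username and the :1.0 tag are placeholders):

```yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    # the name under which `docker-compose push` uploads the built image
    image: docker-hub-username/app:1.0
  db:
    # mysql:5.7 is already a public Docker Hub image; there is nothing to push here
    image: mysql:5.7
```

Anyone can then fetch the app image with docker pull docker-hub-username/app:1.0.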
I have a docker-compose.yml file which brings up several services (redis, db, mongo, app). I made a script to bring the Docker environment up, but it forces you to pass an environment variable which acts as a subdomain for the app (a PHP web app).
So for the app container I have:
app:
  build:
    context: .
    dockerfile: ./docker/app/Dockerfile
  image: xxx:xxx
  container_name: my-app-${ENV}
  restart: always
  depends_on:
    ...
Now what I would like is to be able to fire up several apps which all depend on already brought up containers (for example app1.com and app2.com using the same DB).
So I was trying to bring it up by using:
ENV=$1 VIRTUAL_HOST=$1.com docker-compose up -d --build app
(I am using an nginx container to enable virtual hosts, and $1 comes from a bash script). But what this does is just rebuild the already existing app container and give it a new name.
Can I run docker-compose while building completely new app container, leaving others intact if they already exist?
You can run docker-compose up -d [NEW APPNAME] to build/run a specific app in the compose file
try...
docker-compose build [appName]
docker-compose up --no-deps -d [appName]
The first command builds a new image for [appName].
The second command stops, destroys, and recreates just the [appName] container.
The --no-deps flag prevents Compose from also recreating any services that [appName] depends on.
Your apps have to have different service names, since Compose builds, stops, and creates one set of containers per service name. So your compose file should look similar to:
app:
  build:
    context: .
    dockerfile: ./docker/app/Dockerfile
  image: xxx:xxx
  container_name: my-app-${ENV}
  restart: always
  depends_on: ...
app2:
  build:
    context: .
    dockerfile: ./docker/app/Dockerfile
  image: xxx:xxx
  container_name: my-app2-${ENV}
  restart: always
  depends_on: ...
app3:
  build:
    context: .
    dockerfile: ./docker/app/Dockerfile
  image: xxx:xxx
  container_name: my-app3-${ENV}
  restart: always
  depends_on: ...
The above uses the same Dockerfile for all app containers. To use a different Dockerfile for each app, just change the dockerfile path.
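Since each container_name interpolates ${ENV}, a single value of ENV produces a distinct set of container names. The substitution itself behaves like ordinary shell expansion (service names taken from the file above):

```shell
ENV=prod
for app in my-app my-app2 my-app3; do
  echo "${app}-${ENV}"
done
```

which prints my-app-prod, my-app2-prod and my-app3-prod.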
I am trying to use Docker Compose (with Docker Machine on Windows) to launch a group of Docker containers.
My docker-compose.yml:
version: '2'
services:
  postgres:
    build: ./postgres
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
  frontend:
    build: ./frontend
    ports:
      - "4567:4567"
    depends_on:
      - postgres
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    depends_on:
      - postgres
docker-compose build runs successfully. When I run docker-compose start I get the following output:
Starting postgres ... done
Starting frontend ... done
Starting backend ... done
ERROR: No containers to start
I did confirm that the docker containers are not running. How do I get my containers to start?
The issue here is that you haven't actually created the containers; you have to create them before you can start them. You could use docker-compose up instead, which creates the containers and then starts them.
Or you could run docker-compose create to create the containers and then docker-compose start to start them.
The reason you saw the error is that docker-compose start and docker-compose restart assume the containers already exist.
If you want to build and start containers, use
docker-compose up
If you only want to build the containers, use
docker-compose up --no-start
Afterwards, docker-compose {start,restart,stop} should work as expected.
There used to be a docker-compose create command, but it is now deprecated in favor of docker-compose up --no-start.