Running Tomcat inside a Docker container - Container Exited at startup

I am learning Docker and trying to build a Dockerfile that runs Tomcat, starting it with docker-compose rather than plain docker.
The Dockerfile is as follows:
# Base the image on tomcat
FROM tomcat:7.0.82-jre7
WORKDIR /usr/local/tomcat
# Install updates & commands
RUN apt-get update && apt-get install -y vim
# Add some pre-set files
COPY tomcat-users.xml /usr/local/tomcat/conf
# Run the Tomcat on port 8080
EXPOSE 8080
# Start tomcat
# CMD ["bin/startup.sh", "run"]
The docker-compose.yml file is as follows:
version: '3'
services:
  tomcat:
    image: tomcat:7.0
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 8888:8080
    container_name: tomcat7
    volumes:
      - ./tomcat7:/usr/local/tomcat:rw
    entrypoint: /bin/bash /usr/local/tomcat/bin/startup.sh
    tty: true
The tomcat7 docker container starts but immediately exits.
Any idea how to make it keep running?
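A likely cause, for what it's worth: bin/startup.sh launches Tomcat in the background and then exits, and since it is the container's only process the container exits with it. The stock tomcat image's default command, catalina.sh run, keeps Tomcat in the foreground instead. Note also that the ./tomcat7:/usr/local/tomcat mount hides the image's entire Tomcat install unless ./tomcat7 already contains a full copy of it. A minimal sketch along those lines (the webapps-only mount is an illustrative assumption):
version: '3'
services:
  tomcat:
    build:
      context: ./
      dockerfile: Dockerfile
    container_name: tomcat7
    ports:
      - 8888:8080
    # mount only webapps so the image's own Tomcat install stays visible
    volumes:
      - ./tomcat7/webapps:/usr/local/tomcat/webapps:rw
    # catalina.sh run keeps Tomcat in the foreground, so the container stays up
    command: catalina.sh run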

Related

How to configure docker-compose to run a specific Docker command

I needed to copy files from an Ubuntu container to the host machine, and I found a way to do that using the docker cp <containerId>:/file/path/within/container /host/path/target command. I am able to use this command while I am not in the Ubuntu container, i.e., when I am not in docker exec mode.
My problem is that I want to know how to run Docker commands from the Dockerfile, so that on/before exiting the Ubuntu container, a Docker command runs which copies the content from the container to the host.
I might not be able to do it with CMD [ ], as that would be Docker inside a Docker container.
Any help would be appreciated.
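For what it's worth, docker cp runs on the host and also works on stopped containers, so one option is to copy the files out after the container finishes rather than from inside it. A rough sketch, assuming the image built from the files below (the compose file tags it ubuntu) and an illustrative file name:
# run the container to completion, then copy the result out from the host
docker run --name ubuntu-job ubuntu sh -c 'echo hello > /dir/out.txt'
# docker cp works even though ubuntu-job has exited
docker cp ubuntu-job:/dir/out.txt ./out.txt
docker rm ubuntu-job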
Here is my docker-compose file
version: '3'
services:
  ubuntu:
    build:
      context: .
      dockerfile: Dockerfile
    image: ubuntu
    ports:
      - 8091
    volumes:
      - ./dir:/dir
volumes:
  dir:
    external: false
Here is my Dockerfile
FROM ubuntu:latest
WORKDIR /dir
VOLUME /dir
RUN apt-get update
RUN apt-get install -y
EXPOSE 8000
COPY . .
CMD ["/bin/bash"]
Edit 1:
I tried implementing my original problem using a bind mount this way, but the folders still are not syncing:
version: '3'
services:
  ubuntu:
    build:
      context: .
      dockerfile: Dockerfile
    image: ubuntu
    ports:
      - 8091
    volumes:
      - type: bind
        source: ./dir
        target: /dir
volumes:
  dir:
    external: false
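Note that the top-level volumes: dir: block declares a named volume the service never references; with a bind mount it can be dropped. A quick hedged check that the bind mount actually syncs (the test file name is illustrative):
# write a file inside a one-off container for the service...
docker-compose run --rm ubuntu sh -c 'echo hello > /dir/test.txt'
# ...and it should appear on the host through the bind mount
cat ./dir/test.txt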

How to build docker file for a VueJs application

I am new to Docker. I've built an application with Vue.js 2 that interacts with an external API, and I would like to run the application in Docker.
Here is my docker-compose.yml file
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8080:8080'
Here is my Dockerfile:
FROM node:14.17.0-alpine as develop-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
RUN yarn install
COPY . .
EXPOSE 8080
CMD ["node"]
Here is the command I run to build my image and container:
docker-compose up -d
The image and container build without error, but when I run the container it stops immediately, so the container is not running.
Are the Dockerfile and compose file set up correctly?
First of all, you run npm install and yarn install, which do the same thing using two different package managers. Secondly, you are using CMD ["node"], which does not start your Vue application, so there is no foreground job running and Docker shuts the container down.
For a Vue application you normally want to build the app into static assets and then run a simple HTTP server to serve the static content.
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy 'package.json' to install dependencies
COPY package*.json ./
# install dependencies
RUN npm install
# copy files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
Your docker-compose file could be as simple as
version: "3.7"
services:
  vue-app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: vue-app
    restart: always
    ports:
      - "8080:8080"
    networks:
      - vue-network
networks:
  vue-network:
    driver: bridge
To run the service from docker-compose, use the command property in your docker-compose.yml:
services:
  vue-app:
    command: >
      sh -c "yarn serve"
I'm not sure about the root cause, but using command: tail -f /dev/null in your docker-compose file will keep the container up so you can look around inside it and track down the problem. You can do that by running docker exec -it <CONTAINER-NAME> bash and checking the error logs within the container.
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    command: tail -f /dev/null
    ports:
      - '8080:8080'
In your Dockerfile you have to start your application, e.g. with npm run start or whatever script in your package.json you use for running your application.
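As a sketch, assuming a standard Vue CLI project whose package.json defines a serve script (an assumption about your setup), a dev-mode ending for the Dockerfile could look like this; the --host flag makes the dev server reachable from outside the container:
# hypothetical dev-mode command; assumes a "serve" script in package.json
CMD ["npm", "run", "serve", "--", "--host", "0.0.0.0"]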

How do I make my VS Code dev/remote container port accessible to localhost?

I have a GraphQL application that runs inside a container. If I run docker compose build followed by docker compose up, I can connect to it via localhost:9999/graphql. In the docker-compose file the port mapping is 9999:80. When I run the docker container ls command I can see the ports are forwarded as expected.
I'd like to run this in a VS Code remote container. Selecting Open folder in remote container gives me the option of selecting either the dockerfile or the docker-compose file to build the container. I've tried both options and neither allows me to access the GraphQL playground from localhost. Running from docker-compose I can see that the ports appear to be forwarded in the same manner as if I ran docker compose up, but I can't access the site.
Where am I going wrong?
Update: If I run docker compose up on the container that is built by VS Code, I can connect to localhost and the GraphQL playground.
FROM docker.removed.local/node
MAINTAINER removed
WORKDIR /opt/app
COPY package.json /opt/app/package.json
COPY package-lock.json /opt/app/package-lock.json
COPY .npmrc /opt/app/.npmrc
RUN echo "nameserver 192.168.11.1" > /etc/resolv.conf && npm ci
RUN mkdir -p /opt/app/logs
# Setup a path for using local npm packages
RUN mkdir -p /opt/node_modules
ENV PATH /opt/node_modules/.bin:$PATH
COPY ./ /opt/app
EXPOSE 80
ENV NODE_PATH /opt:/opt/app:$NODE_PATH
ARG NODE_ENV
VOLUME ["/opt/app"]
CMD ["forever", "-o", "/opt/app/logs/logs.log", "-e", "/opt/app/logs/error.log", "-a", "server.js"]
version: '3.5'
services:
  server:
    build: .
    container_name: removed-data-graph
    command: nodemon --ignore 'public/*' --legacy-watch src/server.js
    image: docker.removed.local/removed-data-graph:local
    ports:
      - "9999:80"
    volumes:
      - .:/opt/app
      - /opt/app/node_modules/
      #- ${LOCAL_PACKAGE_DIR}:/opt/node_modules
    depends_on:
      - redis
    networks:
      - company-network
    environment:
      - NODE_ENV=dev
  redis:
    container_name: redis
    image: redis
    networks:
      - company-network
    ports:
      - "6379:6379"
networks:
  company-network:
    name: company-network
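One thing worth trying, hedged since it depends on how the dev container is configured: declare the port in a .devcontainer/devcontainer.json so VS Code forwards it explicitly. forwardPorts is a documented dev container property; the other values below just echo the compose file above.
// .devcontainer/devcontainer.json - a sketch, not a verified fix
{
  "name": "removed-data-graph",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "server",
  "workspaceFolder": "/opt/app",
  // forward the port the app listens on inside the container;
  // check VS Code's Ports view for the local address it maps to
  "forwardPorts": [80]
}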

How to add docker run param to docker compose file?

I am able to run my application with the following command:
docker run --rm -p 4000:4000 myapp:latest python3.8 -m pipenv run flask run -h 0.0.0.0
I am trying to write a docker-compose file so that I can bring up the app using docker-compose up. This is not working. How do I "add" the docker run params to the docker-compose file?
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    volumes:
      - .:/code
You need to use command to specify this.
version: '3'
services:
  web:
    build: .
    ports:
      - '4000:4000'
    image: myapp:latest
    command: 'python3.8 -m pipenv run flask run -h 0.0.0.0'
    volumes:
      - .:/code
You should use CMD in your Dockerfile to specify this. Since you'll want the same command every time you run a container based on the image, there's no reason to specify it manually each run.
CMD python3.8 -m pipenv run flask run -h 0.0.0.0
Within the context of a Docker container, it's typical to install packages into the "system" Python: it's already isolated from the host Python by virtue of being in a Docker container, and the setup to use a virtual environment is a little bit tricky. That gets rid of the need to run pipenv run.
FROM python:3.8
WORKDIR /code
# pipenv isn't included in the base image, so install it first
RUN pip install pipenv
# with multiple sources, the COPY destination must end in a slash
COPY Pipfile Pipfile.lock ./
RUN pipenv install --deploy --system
COPY . .
CMD flask run -h 0.0.0.0
Since the /code directory is already in your image, you can actually make your docker-compose.yml shorter by removing the unnecessary bind mount
version: '3'
services:
  web:
    build: .
    ports:
      - "4000:4000"
    # no volumes:

Run Gatsby with docker compose

I am trying to run Gatsby with Docker Compose.
From what I understand, the Gatsby site is running in my Docker container.
I map port 8000 of the container to port 8000 on my localhost.
But when I look at localhost:8000 I am not getting my Gatsby site.
I use the following Dockerfile to build the image with docker build -t nxtra/gatsby .:
FROM node:8.12.0-alpine
WORKDIR /project
COPY ./package.json /project/package.json
COPY ./.entrypoint/entrypoint.sh /entrypoint.sh
RUN apk update \
&& apk add bash \
&& chmod +x /entrypoint.sh \
&& npm set progress=false \
&& npm install -g yarn gatsby-cli
EXPOSE 8000
ENTRYPOINT [ "/entrypoint.sh" ]
entrypoint.sh contains:
#!/bin/bash
yarn install
gatsby develop
docker-compose.yml, run with docker-compose up:
version: '3.7'
services:
  gatsby:
    image: nxtra/gatsby
    ports:
      - "8000:8000"
    volumes:
      - ./:/project
    tty: true
docker ps shows that port 8000 is forwarded: 0.0.0.0:8000->8000/tcp.
Inspecting my container with docker inspect --format='{{.Config.ExposedPorts}}' id confirms the exposure of the port -> map[8000/tcp:{}]
docker top on the container shows the following processes running in it:
18465 root 0:00 {entrypoint.sh} /bin/bash /entrypoint.sh
18586 root 0:11 node /usr/local/bin/gatsby develop
18605 root 0:00 /usr/local/bin/node /project/node_modules/jest-worker/build/child.js
18637 root 0:00 /bin/bash
The Dockerfile and docker-compose.yml sit in the root of my Gatsby project.
The project runs correctly when I start it without Docker using gatsby develop.
What am I doing wrong that prevents the Gatsby site running in my container from being visible on localhost:8000?
My issue was that Gatsby was only listening for requests from within the container, like this answer suggests. Make sure you've configured Gatsby to use the host 0.0.0.0. Take this (somewhat hacky) setup as an example:
Dockerfile
FROM node:alpine
RUN npm install --global gatsby-cli
docker-compose.yml
version: "3.7"
services:
  gatsby:
    build:
      context: .
      dockerfile: Dockerfile
    entrypoint: gatsby
    volumes:
      - .:/app
  develop:
    build:
      context: .
      dockerfile: Dockerfile
    command: gatsby develop -H 0.0.0.0
    ports:
      - "8000:8000"
    volumes:
      - .:/app
You can run Gatsby commands from a container:
docker-compose run gatsby info
Or run the development server:
docker-compose up develop
