Docker runs only on port 80

I am unable to run my Docker image on port 4000, even though I can run it on port 80. What am I doing wrong here?
FROM node:latest as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:latest
COPY --from=build /usr/src/app/dist/admin /usr/share/nginx/html
EXPOSE 4200
I'm creating the image using the following command:
docker build --pull --rm -f "DockerFile" -t admin1:v1 "."
When I run it on port 80, I'm able to use it:
docker run --rm -d -p 4200:4200/tcp -p 80:80/tcp admin1:v1
However, when I run the following command, I'm unable to use it:
docker run --rm -d -p 4200:4200/tcp -p 4000:4000/tcp admin1:v1
I have researched similar questions online, but I haven't been able to fix the problem. Any suggestions would be greatly appreciated!

You need to map the Docker host port to the port the container actually listens on. Your image is built on nginx, which listens on port 80 inside the container, so publish host port 4000 to container port 80.
Try the following command:
docker run --rm -d -p 4200:4200/tcp -p 4000:80/tcp admin1:v1
The following is an extract from the Docker documentation:
-p 8080:80 Map TCP port 80 in the container to port 8080 on the Docker host.
You can refer to the link below for further information.
Docker Documentation
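If instead you want nginx itself to listen on port 4000 inside the container, you can replace the stock site config. A minimal sketch, assuming the default nginx image layout (where /etc/nginx/conf.d/default.conf holds the stock server block):

```nginx
# default.conf — make nginx listen on 4000 instead of 80
server {
    listen 4000;
    location / {
        root  /usr/share/nginx/html;
        index index.html;
    }
}
```

In the second build stage you would then add COPY default.conf /etc/nginx/conf.d/default.conf and change EXPOSE 4200 to EXPOSE 4000, after which docker run -p 4000:4000 lines up with what the container actually listens on.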

Related

Getting hot reload/HRM working with Docker and Vue 3

Vue CLI version is ~5.0.0. Vue version is 3.
This is my Dockerfile:
FROM node:lts-alpine
WORKDIR /app
COPY package*.json .
RUN ["npm", "install"]
COPY . .
EXPOSE 8080
CMD ["npm", "run", "serve"]
I build the image (image name is web) and run it with
docker run -p 8080:8080 --rm -it -v ${pwd}:/app -v /app/node_module web
If I make changes to the source code, I see no changes in the browser. Inside the container I can see that the files have changed, but the browser still shows the old version.
I've seen examples of people running a Vue container the following way:
docker run -p 8080:8080 -e CHOKIDAR_USEPOLLING=true -e HOST=0.0.0.0 --rm -it -v ${pwd}:/app -v /app/node_module web
However, the above doesn't really fix the problem.
I think it has something to do with web sockets but I'm kind of lost on what to try.

Docker container can't communicate with another container on the same network

I have two images that should communicate with each other. The first image is the web API, and the second image is the web itself.
Before I build and run the images, I create a new network called nat using the command:
docker network create nat
After this, I create my images, which I called image-api (running on port 8080) and image-web (running on port 8081).
Then, I run the image-api image using the command:
docker run -d -p 3000:8080 --network nat image-api
I mapped container port 8080 to host port 3000. When I access localhost:3000 in my browser, it runs without error and gives the response it should.
The problem here is, my second image, the image-web image. I try to run it using the command:
docker run -d -p 3001:8081 --network nat image-web
When I try to access localhost:3001 in my browser, it's running, but not giving the data from the image-api container. When I check the logs of the image-web container, I get an error like:
Error: getaddrinfo ENOTFOUND image-api
Just in case, I try to access the image-api container from my image-web container using URL like this:
http://image-api/ping
Here's my image-api Dockerfile:
FROM node:14 as builder
WORKDIR /app
COPY package.json /app
RUN npm install
FROM node:14
EXPOSE 8080
CMD ["node", "server.js"]
WORKDIR /server
COPY --from=builder /app/node_modules /server/node_modules
COPY server.js .
And here's my image-web Dockerfile:
FROM node:14 as builder
WORKDIR /app
COPY package.json .
RUN npm install
FROM node:14
ENV IMAGE_URL=http://image-api/ping
EXPOSE 8081
CMD ["node", "app.js"]
WORKDIR /web
COPY --from=builder /app/node_modules /web/node_modules
COPY /src .
Just in case, I already tried to run both of them without Docker, and they run as they should on my local computer.
EDIT:
I've also tried to run the container using name like this:
docker run -d -p 3000:8080 --network nat --name imageapi image-api
And try to access it using:
http://imageapi/ping
But it's still giving me the same error
SOLUTIONS:
As pointed out by @davidMaze in the answer: after running the containers with the --name flag, I can access the container at http://imageapi:8080/ping
You need to explicitly specify a docker run --name when you start the container. Since you can run multiple containers from the same image, Docker doesn't automatically assign a container name based on the image name; you need to set it yourself.
docker run -d -p 3000:8080 --network nat --name image-api image-api
docker run -d -p 3001:8081 --network nat --name image-web image-web
Once you set the container name, Docker provides an internal DNS service and the container will be reachable by that name from other containers on the same network.
"Use bridge networks" in the Docker documentation describes this further, though mostly in contrast to an obsolete networking mode.
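The same setup can also be written declaratively with Docker Compose, which names and networks the services automatically. A sketch, where the build contexts ./api and ./web are assumptions about the project layout:

```yaml
# docker-compose.yml — both services join the default network,
# so "image-api" resolves by DNS from inside image-web
services:
  image-api:
    build: ./api
    ports:
      - "3000:8080"   # host 3000 -> container 8080
  image-web:
    build: ./web
    environment:
      # include the container port: DNS gives you the name, not a port mapping
      IMAGE_URL: http://image-api:8080/ping
    ports:
      - "3001:8081"
```

Running docker compose up -d then replaces both docker run commands, and no manual docker network create is needed.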

I am able to run the container in Docker but unable to view it in the browser

I am new to Docker. First, I created a Dockerfile in the source code location.
Here is my Dockerfile:
FROM nginx:latest
RUN mkdir /app
COPY . /app
EXPOSE 8000
Later, I built an image using: docker build -t mywebapp:v1 .
and ran the container using the following command:
docker run -d -p 8000:8000 mywebapp:v1
The problem is: the container is running with port 8000 published, but I am unable to view the site in the browser at
http://192.168.13.135:8000
Please help me solve this problem so I can view it in the browser.
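For what it's worth, the image above likely starts but serves nothing on 8000: EXPOSE is only metadata, and the stock nginx config listens on port 80 and serves from /usr/share/nginx/html, not /app. A minimal sketch that keeps the default config:

```dockerfile
FROM nginx:latest
# put the site where the default server block actually looks
COPY . /usr/share/nginx/html
# nginx listens on 80; publish it to any host port at run time
EXPOSE 80
```

Then docker run -d -p 8000:80 mywebapp:v1 should make http://192.168.13.135:8000 reachable.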

Google Compute Engine Container Port Closed

I added a firewall rule to open port 8080. If I click the SSH button in the GCE console, and run on the host shell:
nc -l -p 8080 127.0.0.1
I can detect the opened port. If I then go to the container's shell with:
docker run --rm -i -t <image> /bin/sh
and run the same netcat command, I can't detect the open port.
I went down this troubleshooting route because I couldn't connect to a node:alpine container running the ws npm package for a demo websocket server. Here is my Dockerfile:
# specify the node base image with your desired version node:<version>
FROM node:alpine
# replace this with your application's default port
EXPOSE 8080
WORKDIR /app
RUN apk --update add git
The fix is to publish the port when starting the container:
docker run --rm -i -t -p 8080:8080 <image> /bin/sh
per Google Compute Engine Container Port Closed
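For the websocket demo itself, note that the Dockerfile above never installs the app's dependencies, copies the source, or sets a start command. A completed sketch, where server.js is a hypothetical entrypoint name:

```dockerfile
FROM node:alpine
# replace this with your application's default port
EXPOSE 8080
WORKDIR /app
RUN apk --update add git
# install dependencies (including the ws package) and copy the source
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "server.js"]
```

Built and run with -p 8080:8080, the ws server would then be reachable from outside the container.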

Run microsoft/nanoserver in a docker file

Link to microsoft/nanoserver
If I follow the process in the link above, I can get the Nano Server image to run in Docker from the command line.
docker run --name nanoiis -d -it -p 8080:80 nanoserver/iis
is the command line I use.
I want to put this into a Dockerfile and build an image. So here is my Dockerfile:
FROM microsoft/nanoserver
# Set the working directory to /app
WORKDIR /app
# Copy the Public directory contents into the container at /app
ADD ./Public /app
# -p 8080:80 Map TCP port 80 in the container to port 8080 on the Docker host.
RUN --name nanoiis -d -it -p 8080:80 nanoserver/iis
I get an error
Error response from daemon: Dockerfile parse error line 12: Unknown
flag: name
My question is what am I doing wrong?
I am following the example on building a docker image here:
https://docs.docker.com/get-started/part2/
Next question: what is the command I would use to get my app running? In the example they use:
CMD ["python", "app.py"]
What should I use to get nano-server running?
Lastly, please point me to some documentation on getting a web site running on Nano Server. It seems that Nano Server has changed its role within Microsoft.
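On the error itself: RUN executes a command during the image build, while --name, -d, -it and -p are flags of docker run, which starts a container afterwards — they can't appear inside a Dockerfile. A sketch of the split, assuming the IIS-based image serves from the default web root:

```dockerfile
# Dockerfile — build on the IIS image; don't "docker run" inside it
FROM nanoserver/iis
# copy the site into IIS's default web root instead of /app
COPY ./Public /inetpub/wwwroot
```

Then build and run, mapping host 8080 to the container's port 80: docker build -t nanoiis . followed by docker run -d -p 8080:80 nanoiis. No CMD should be needed here, since the base image's own entrypoint already starts IIS.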
