How to connect to docker-machine via its IP address (Hyper-V)

I have a backend that I want to run in a Docker container in order to connect to it from another computer or device.
I created an external virtual switch in Hyper-V and a new virtual machine connected to this switch, using the command:
docker-machine create -d hyperv --hyperv-virtual-switch <NameOfVirtualSwitch> <nameOfNode>
The VM is connected to this external virtual switch in its network settings.
This is the set of commands I run:
docker container prune
docker image prune
docker build -t nestjsdocker:latest .
docker run -it -p 3001:3001 --name {id of the image} nestjsdocker:latest
Here is my Dockerfile:
FROM node:10-alpine
WORKDIR /src/app
COPY . .
RUN npm install
EXPOSE 3001
CMD ["npm","start"]
When I type docker-machine ip {name of my vm} I get 10.10.0.242, but when I open http://10.10.0.242:3001/ in the browser I get the error 'This site can’t be reached'.
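One way to narrow this down (a diagnostic sketch, not a confirmed fix): first check whether the container answers from inside the docker-machine VM, then whether the VM's IP answers from the Windows host. If the first check works but the browser on another device does not, the problem is usually the Windows/Hyper-V firewall or the external switch rather than Docker itself. Note also that if the NestJS app calls app.listen(3001, 'localhost') it will only be reachable inside the container; app.listen(3001) binds all interfaces.

```shell
# Inside the docker-machine VM: does the container answer locally?
docker-machine ssh <nameOfNode> "curl -s http://localhost:3001/ || echo 'no answer inside VM'"

# From the Windows host: does the VM's IP answer?
curl http://10.10.0.242:3001/
```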

Related

Question about needing host network in running pouchdb-server inside docker

I have this docker image that exposes port 5984 from a pouchdb-server running inside a docker container.
Here's what the Dockerfile looks like:
FROM node:16-alpine
WORKDIR /pouchdb
RUN apk update
RUN npm install --location=global pouchdb-server
EXPOSE 5984
ENTRYPOINT ["pouchdb-server"]
CMD ["--port", "5984"]
Running the container using the default bridge network doesn't work:
docker run -d -v $(pwd)/pouchdb -p 5984:5984 pouchdb-server:v1
But upon running the container with the host docker network, it works like a charm:
docker run -d -v $(pwd)/pouchdb -p 5984:5984 --network host pouchdb-server:v1
I understand that host networking removes the network isolation between the container and the host, but it has the caveat of possible port conflicts.
My question is, is there any way to make this work without using the host network?
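A likely cause (an assumption, but consistent with the symptoms): pouchdb-server binds to 127.0.0.1 by default, so on the bridge network the port published with -p forwards to the container's eth0 address and the connection is refused; with --network host the server's 127.0.0.1 is the host's 127.0.0.1, which is why that "works". If your pouchdb-server version supports a --host flag (the versions I'm aware of do), binding to 0.0.0.0 should make the bridge network work:

```dockerfile
FROM node:16-alpine
WORKDIR /pouchdb
RUN apk update
RUN npm install --location=global pouchdb-server
EXPOSE 5984
ENTRYPOINT ["pouchdb-server"]
# Bind to all interfaces so the port published with -p is reachable
CMD ["--host", "0.0.0.0", "--port", "5984"]
```

With that change, the original `docker run -d -v $(pwd)/pouchdb -p 5984:5984 pouchdb-server:v1` should respond on http://localhost:5984 without host networking.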

How to access to a docker container via SSH using IP address?

I'm using NVIDIA Docker in a Linux machine (Ubuntu 20.04). I've created a container named user1 using nvidia/cuda:11.0-base image as follows:
docker run --gpus all --name user1 -dit nvidia/cuda:11.0-base /bin/bash
And, here is what I see if I run docker ps -a:
admin@my_desktop:~$ docker ps -a
CONTAINER ID   IMAGE                   COMMAND       CREATED         STATUS         PORTS     NAMES
a365362840de   nvidia/cuda:11.0-base   "/bin/bash"   3 seconds ago   Up 2 seconds             user1
I want to access that container via ssh, using its unique IP address, from a totally different machine (other than my_desktop, which is the host). First of all, is it possible to grant each container a unique IP address? If so, how can I do it? Thanks in advance.
In case you want to access your container over ssh from an external VM, you need to do the following:
Install the ssh daemon in your container
Run the container and expose its ssh port
I would propose the following Dockerfile, which builds on nvidia/cuda:11.0-base and creates an image with the ssh daemon inside.
Dockerfile
# Instruction for Dockerfile to create a new image on top of the base image (nvidia/cuda:11.0-base)
FROM nvidia/cuda:11.0-base
ARG root_password
RUN apt-get update || echo "OK" && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo "root:${root_password}" | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Build the image from the Dockerfile
docker image build --build-arg root_password=password --tag nvidia/cuda:11.0-base-ssh .
Create the container
docker container run -d -P --name ssh nvidia/cuda:11.0-base-ssh
Run docker ps to see which host port was mapped to the container's port 22
Finally, access the container
ssh -p 49157 root@<VM_IP>
EDIT: As David Maze correctly pointed out, you should be aware that the root password will be visible in the image history, and that this approach replaces the original container process.
This process needs to be modified if it is to be adopted for production use. It serves as a starting point for anyone who wishes to add ssh to their container.
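If the build-arg password is a concern, a key-based variant keeps any secret out of the image history. This is a sketch under the assumption that an id_rsa.pub public key file sits next to the Dockerfile; on Ubuntu-based images the default PermitRootLogin prohibit-password already allows key-based root login, so no sshd_config edits are needed:

```dockerfile
FROM nvidia/cuda:11.0-base
RUN apt-get update || echo "OK" && apt-get install -y openssh-server
RUN mkdir /var/run/sshd /root/.ssh
# Bake a public key into the image instead of setting a root password
# (id_rsa.pub is assumed to exist in the build context)
COPY id_rsa.pub /root/.ssh/authorized_keys
RUN chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```

You then log in with the matching private key (ssh -i id_rsa -p <mapped_port> root@<VM_IP>) and no password ever enters the image.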

How to access remote telnet server within a docker container?

I would like to watch the Star Wars ASCII movie from the telnet server towel.blinkenlights.nl within a Docker container.
Given this Dockerfile based on nerdalert:
FROM alpine:latest
RUN apk add busybox-extras
ENTRYPOINT ["/usr/bin/telnet", "towel.blinkenlights.nl"]
With these build and run commands:
docker build . -t starwars
docker run --rm -i -P starwars
I receive the following error message:
telnet: can't connect to remote host (213.136.8.188): Connection refused
I also tried this run command, with the same error:
docker run --rm --network host -P starwars
and changed the Dockerfile base image to bitnami/minideb:stretch, with no success.
How should I change the Dockerfile or the docker run command to reach this remote telnet server?
Without a Docker container, I can access the telnet server towel.blinkenlights.nl from my Windows host without any problems.
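A few checks that separate DNS and routing problems from the server itself (a sketch; nslookup and nc are part of BusyBox in the alpine image). Note that -P only publishes ports for inbound connections and has no effect on outbound traffic, so it can be dropped from these commands:

```shell
# Can a container resolve the name at all?
docker run --rm alpine nslookup towel.blinkenlights.nl

# Can a container open a TCP connection to port 23?
docker run --rm alpine sh -c 'nc -zv -w 5 towel.blinkenlights.nl 23'
```

If these fail while the same checks succeed on the Windows host, the problem lies in Docker Desktop's NAT/DNS layer rather than in the Dockerfile.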

Unable to Run the Fast API Application deployed in Docker Container

I have created a Docker image based on the following Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY . /usr/app/
EXPOSE 80
WORKDIR /usr/app/
RUN pip install -r requirements.txt
CMD ["uvicorn", "app_Testing_Praveen:app", "--host", "0.0.0.0", "--port", "80"]
following the documentation available at
https://fastapi.tiangolo.com/deployment/docker/
After running the command
docker run -p 80:80 image_name
My docker image is running, and uvicorn reports the address 0.0.0.0:80, but I am not able to find the absolute link to open the application. I know that, due to virtualization, Docker may use a different external IP address.
I found an IP on my docker network interface ("docker subnet mask"), but that value also does not open the application in the browser.
My docker version is Docker version 20.10.5, build 55c4c88 and I am running this on windows.
You reach services inside Docker containers via the IP of the host machine.
So you access your service either at http://localhost:80 or, from another machine, at http://<docker_host_ip>:80.
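In other words, the 0.0.0.0:80 that uvicorn prints is the bind address inside the container, not an address to browse to. Since -p 80:80 publishes the port on the host, a quick check from the machine running Docker looks like this (the /docs path assumes the FastAPI app has not disabled its default interactive docs):

```shell
# The application itself
curl http://localhost:80/

# FastAPI's interactive API docs, served by default
curl http://localhost:80/docs
```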

docker container can't communicate to other container on the same network

I have two images that should communicate with each other. The first image is the web API, and the second image is the web itself.
Before I build and run the images, I create a new network called nat using the command:
docker network create nat
After this, I start to create my images, which I called image-api, which runs on port 8080, and image-web, which runs on port 8081.
Then, I run the image-api image using the command:
docker run -d -p 3000:8080 --network nat image-api
I mapped the container port 8080 to my host port 3000. I tried to access the localhost port 3000 in my browser and it's running without an error and giving me the response as it should be.
The problem here is, my second image, the image-web image. I try to run it using the command:
docker run -d -p 3001:8081 --network nat image-web
When I try to access localhost:3001 in my browser, it's running, but not giving the data from the image-api container. When I check the logs of the image-web container, it gives me an error like:
Error: getaddrinfo ENOTFOUND image-api
Just in case, I try to access the image-api container from my image-web container using URL like this:
http://image-api/ping
Here's my image-api Dockerfile:
FROM node:14 as builder
WORKDIR /app
COPY package.json /app
RUN npm install
FROM node:14
EXPOSE 8080
CMD ["node", "server.js"]
WORKDIR /server
COPY --from=builder /app/node_modules /server/node_modules
COPY server.js .
And here's my image-web Dockerfile:
FROM node:14 as builder
WORKDIR /app
COPY package.json .
RUN npm install
FROM node:14
ENV IMAGE_URL=http://image-api/ping
EXPOSE 8081
CMD ["node", "app.js"]
WORKDIR /web
COPY --from=builder /app/node_modules /web/node_modules
COPY /src .
Just in case, I already tried to run both of them without docker, and they run as it should be in my local computer.
EDIT:
I've also tried to run the container with a name, like this:
docker run -d -p 3000:8080 --network nat --name imageapi image-api
And try to access it using:
http://imageapi/ping
But it still gives me the same error.
SOLUTION:
As pointed out by @davidMaze in the answer: after running the container with a --name, I can access my container at http://imageapi:8080/ping
You need to explicitly specify a docker run --name when you start the container. Since you can run multiple containers from the same image, Docker doesn't automatically assign a container name based on the image name; you need to set it yourself.
docker run -d -p 3000:8080 --network nat --name image-api image-api
docker run -d -p 3001:8081 --network nat --name image-web image-web
Once you set the container name, Docker provides an internal DNS service and the container will be reachable by that name from other containers on the same network.
Use bridge networks in the Docker documentation describes this further, though mostly in contrast to an obsolete networking mode.
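The name-based DNS can be demonstrated without the original images (a minimal sketch mirroring the question's nat network, with nginx standing in for image-api and curlimages/curl for image-web):

```shell
docker network create nat
docker run -d --network nat --name image-api nginx

# Another container on the same network reaches it by container name.
# Note: the DNS name resolves to the container IP, so you use the
# container's own port, not a published host port.
docker run --rm --network nat curlimages/curl http://image-api:80/
```

This is also why the solution above uses http://imageapi:8080/ping rather than port 3000: container-to-container traffic bypasses the -p port mapping entirely.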