Docker beginner here.
I created a simple ASP.NET web application which, when run, shows me the default page of the application.
Using the docker build command, I create an image out of it, and then run it with docker run -d --name {containername} -p 81:8080 {imageid}. Now when I try to access the container over localhost in the browser, i.e. http://localhost:81/, I get a 'The site cannot be reached' error. I expected the same default page of the application to open on the exposed port 81.
My Docker client is windows/amd64 and the Docker server is linux/amd64. The Docker version I am using is 19.03.8.
Using docker inspect I can see:
    "PortBindings": {
        "8080/tcp": [
            {
                "HostIp": "",
                "HostPort": "81"
            }
        ]
    },
and "IPAddress": "" under NetworkSettings.
Screenshots of docker ps and docker ps -a are attached.
I would appreciate any help or suggestion.
From the screenshots attached, it seems your container is killed as soon as it is started. You need a foreground process running in the container to keep it up and running; only then will you be able to access it via the host ip:port, in this case http://localhost:81.
In your docker ps -a output the status is Exited. If your container is up and running, it should look something like this:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c01db0b339c ubuntu:12.04 bash 17 seconds ago Up 16 seconds
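To see why the container exited in the first place, a couple of standard diagnostics help (substituting the container name from the question):

```shell
# Show the container's stdout/stderr, which usually reveals the crash
docker logs {containername}

# Show the exit code and OOM-kill flag recorded by the daemon
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' {containername}
```

For an ASP.NET Core app, the usual cause is an ENTRYPOINT/CMD that finishes immediately; the image should end with something like ENTRYPOINT ["dotnet", "MyApp.dll"] (MyApp.dll being whatever your published assembly is called) so the Kestrel server stays in the foreground.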
I know this question has been asked on the web before, but I have not been able to find a solution that works for me.
I have one server (VM1) with sshfs installed that should provide remote file-system storage. I have another server (VM2) where containers run; I would like these containers to use volumes hosted on VM1.
I followed this official Docker guide.
So in VM1 I ran:
docker plugin install vieux/sshfs DEBUG=1 sshkey.source=/home/debian/.ssh/
In VM 2 I ran:
docker volume create --name remotevolume -d vieux/sshfs -o sshcmd=debian@vm1:/home/debian/sshfs300 -o IdentityFile=/home/csicari/data/Mega/lavoro/keys/vm-csicari.key -o allow_other -o nonempty
This is the inspect output:
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "vieux/sshfs:latest",
        "Labels": {},
        "Mountpoint": "/mnt/volumes/895d7f7679f69131500c786d7fe5fdc1",
        "Name": "remotevolume",
        "Options": {
            "IdentityFile": "/home/csicari/data/Mega/lavoro/keys/vm-csicari.key",
            "sshcmd": "debian@vm1:/home/debian/sshfs300"
        },
        "Scope": "local"
    }
]
In VM1 I also ran:
docker run -it -v remotevolume:/home -d ubuntu
But I got this error:
docker: Error response from daemon: VolumeDriver.Mount: sshfs command execute failed: exit status 1 (read: Connection reset by peer).
See 'docker run --help'.
This question was asked a while back, but maybe it will help other newbies. On the remote VM, check the PasswordAuthentication property in the /etc/ssh/sshd_config file.
If it is set to yes, you can pass the password as a parameter; otherwise, change the value and restart the ssh or sshd service:
service ssh restart
service ssh status
Password authentication can also depend on your PAM configuration, so check that as well.
If it is an AWS instance, reset the password using the command passwd ubuntu (here ubuntu is the default user in an EC2 Ubuntu instance).
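Before involving the plugin at all, it can help to confirm that sshfs works from VM2 to VM1 with the same source and key (paths taken from the question; /mnt/test is a hypothetical mount point):

```shell
mkdir -p /mnt/test
# Same sshcmd and key the volume was created with; if this fails,
# the docker volume mount will fail the same way
sshfs debian@vm1:/home/debian/sshfs300 /mnt/test \
    -o IdentityFile=/home/csicari/data/Mega/lavoro/keys/vm-csicari.key
ls /mnt/test              # should list the remote directory
fusermount -u /mnt/test   # unmount when done
```

If this manual mount is also reset by the peer, the problem is in the ssh configuration or key permissions, not in Docker.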
I am trying to run a Shiny app on a remote server (here DigitalOcean) using Docker.
First, I created a package for my app as a .tar.gz file. Then:
Create the following Dockerfile:
FROM thinkr/rfull
COPY myapp_*.tar.gz /myapp.tar.gz
RUN R -e "install.packages('myapp.tar.gz', repos = NULL, type = 'source')"
COPY Rprofile.site /usr/local/lib/R/etc
EXPOSE 3838
CMD ["R", "-e myapp::run()"]
Create the following Rprofile.site
local({
options(shiny.port = 3838, shiny.host = "0.0.0.0")
})
Then I build the image using
docker build -t myapp .
I push the image to DockerHub using
docker tag myapp myrepo/myapp:latest
docker push myrepo/myapp
I connect to my droplet on DigitalOcean
eval $(docker-machine env mydroplet)
I create a container from my image on Dockerhub
docker run -d -p 3838:3838 myrepo/myapp
So far it seems to work fine: no error messages, and I get the expected messages when I run docker logs mycontainer.
The problem is that I do not know how to actually access the running container. When I connect to the droplet IP, I get nothing (This site can't be reached). If I use
docker inspect --format '{{ .NetworkSettings.IPAddress }}' mycontainer
I get an IP, but it seems to be an internal one (172.17.0.2).
When I run docker ps here is what I got
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6d4195XXXX myrepo/myapp "R '-e myapp::ru…" 10 days ago Up 10 days 0.0.0.0:3838->3838/tcp, 8787/tcp determined_brown
So the question is: how can I run my dockerized shiny app on my droplet IP address?
Check if you have added a firewall rule to allow connections to port 3838.
https://www.digitalocean.com/docs/networking/firewalls/resources/troubleshooting/
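If the droplet also runs a host firewall such as ufw (an assumption; the linked page covers DigitalOcean's cloud firewalls), opening the port would look like:

```shell
sudo ufw allow 3838/tcp
sudo ufw status
```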
First, you need to publish the port, which you already do.
Second, you need to access the IP address of the host machine where the port is published.
The easiest way is probably to check the output of docker-machine env mydroplet and use that IP together with your published port.
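docker-machine can also print the droplet's public IP directly, which is less noisy than parsing the env output:

```shell
docker-machine ip mydroplet
# then browse to http://<that-ip>:3838
```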
I've read the documentation for docker kill. Now how exactly do I stop all containers or kill a container?
Should I navigate to the Docker folder in Program Files in cmd, or to the botium folder I created for Botium Box? I currently have the Docker Desktop version.
I'm getting the error below:
I restarted the Docker Desktop app.
Cmd: navigated to the botium folder I created for Botium Box.
Entered: docker-compose -f docker-compose-all.yml up
The error was thrown:
C:\Users\Ram\Documents\Botium>docker-compose -f docker-compose-all.yml up
Starting botium_redis_1  ... error
botium_mysql_1 is up-to-date
Starting botium_prisma_1 ... error

ERROR: for botium_prisma_1  Cannot start service prisma: driver failed programming external connectivity on endpoint botium_prisma_1 (1ad423ca349cd5d987a082407c64c8300e2822a0e4c3bf6a63c4369705f1413a): Bind for 0.0.0.0:4466 failed: port is already allocated

ERROR: for botium_redis_1  Cannot start service redis: driver failed programming external connectivity on endpoint botium_redis_1 (023c3f7d0101a509a677a2f5434b00f25a8e4d3e238166eae6e0c1678b81035b): Bind for 0.0.0.0:6379 failed: port is already allocated

ERROR: for prisma  Cannot start service prisma: driver failed programming external connectivity on endpoint botium_prisma_1 (1ad423ca349cd5d987a082407c64c8300e2822a0e4c3bf6a63c4369705f1413a): Bind for 0.0.0.0:4466 failed: port is already allocated

ERROR: for redis  Cannot start service redis: driver failed programming external connectivity on endpoint botium_redis_1 (023c3f7d0101a509a677a2f5434b00f25a8e4d3e238166eae6e0c1678b81035b): Bind for 0.0.0.0:6379 failed: port is already allocated

ERROR: Encountered errors while bringing up the project.
However, when I retried http://127.0.0.1:4000/quickstart a couple of times, the Botium Box opened. But initially it was not opening.
You don't have to navigate.
If you run using docker-compose, you can go to the directory where your docker-compose.yml file is located and run docker-compose down.
Without docker-compose, you have to run docker ps to list all currently running containers and find the name of the container to kill. You can use the CONTAINER ID or the NAMES. Then run docker kill <container name>.
Example:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
myId myimage:2.5 "/opt/command/ba…" 24 hours ago Up About an hour 0.0.0.0:9000->9000/tcp very_cool_name_1
$ docker kill very_cool_name_1
very_cool_name_1
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$
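Since the underlying error here is "port is already allocated", it is also worth finding which container already holds the port. docker ps supports a publish filter for exactly this:

```shell
# List containers publishing host port 6379 (redis) or 4466 (prisma)
docker ps --filter "publish=6379"
docker ps --filter "publish=4466"
# Stop the conflicting container before re-running docker-compose up
```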
Just type the commands below in PowerShell or Bash.
To stop all running containers:
docker stop $(docker ps -q)
To remove all containers:
docker rm $(docker ps -qa)
Please note that rm will only remove your container, not the Docker image. If you want to delete an image, you can use: docker rmi -f image_id
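Stopping and removing can also be collapsed into one step with rm -f, which force-removes even running containers (use with care):

```shell
docker rm -f $(docker ps -qa)
```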
I hosted our application inside a Docker container. When I run the docker ps command, it gives info like below.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6405daf98246 rdarukumalli/testapp-master "/bin/bash" 4 hours ago Up 4 hours 0.0.0.0:32797->443/tcp, 0.0.0.0:32796->8000/tcp, 0.0.0.0:32795->8080/tcp, 0.0.0.0:32794->8443/tcp, 0.0.0.0:32793->9997/tcp insane_poincare
I am trying to access this application from my machine with the following URLs. Nothing worked so far.
0.0.0.0:32795/testapp/login.jsp
0.0.0.0:8080/testapp/login.jsp
localhost:8080/testapp/login.jsp
localhost:32795/testapp/login.jsp
However, if I run curl http://localhost:8080/testapp/login.jsp inside bash in the docker container, the login page HTML comes back.
Can someone help me understand these URL mappings and which URL I need to use to access this login page from outside the docker container?
Try curl http://localhost:32795/testapp/login.jsp.
Your docker ps output shows that the container's port 8080 is bound to external port 32795: [...] 0.0.0.0:32795->8080/tcp [...]
The docker ps command shows the running container, and the PORTS column displays the host port your application is published on. In the browser, type http://localhost:54529/your_application_page; when your application runs in a container, this host port is what needs to change in the browser URL to reach it.
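A quick way to list a single container's mappings is docker port (using the container name insane_poincare from the question's docker ps output):

```shell
docker port insane_poincare
# prints one line per mapping, e.g. 8080/tcp -> 0.0.0.0:32795
```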
Playing with ELK and Docker, I needed to restart all the services.
docker ps told me that I don't have any containers up.
docker run -it --rm [...] --name es elasticsearch -> Error response from daemon. The name "es" is already in use by container [...]
So I tried to remove all containers:
docker ps -a -q | xargs docker rm -> Cannot connect to the Docker daemon. Is the docker daemon running on this host?
The container is not up but is still here.
Of course I could simply change my container's name, but that's not right; it means I still have a container around, even if I restart my server.
Any idea?
When you stop your container, it is not removed by default unless you provided the --rm flag. So it could be that you started and stopped a container with the name es before, and it is stopped now. It is not possible to create a new container with an existing name, even if the existing one is not running. Use the -a flag to show all the containers you have:
docker ps -a
If you have one with the name es, just remove it manually with:
docker rm es
You can also provide the -f flag to force removal of the es container even if it is running.
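To check for the leftover container without scanning the whole docker ps -a list, you can filter by name:

```shell
docker ps -a --filter "name=es"   # shows the stopped container, if any
docker rm es                      # then remove it
```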
docker rm es should do the trick. Furthermore, if you want to remove a running container, you can add the -f parameter (docker rm -f container_name).