I'm new to Docker and trying to understand how images work. I ran the following command:
sudo docker search hello-world
and it returned this:
docker.io docker.io/carinamarina/hello-world-app This is a sample Python web application,
I then ran:
sudo docker run docker.io/carinamarina/hello-world-app
...and this was the output from the terminal:
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
I don't understand. How can the IP address be 0.0.0.0? I entered that into a browser and got nothing. I tried localhost:5000 and got nothing.
How does one get to see this webapp run?
tl;dr
You need to publish the container's port to the host network to see the application working.
long version:
Well, good for you for starting to work with Docker!
I will start by explaining a little bit about Docker, then I will explain what is happening here.
First of all, there is a difference between an "image" and a "container".
An image is the blueprint that containers are created from.
You write the definition of the image (install this, copy that from the host, build that, etc.) in the image file, then you tell Docker to build this image, and then you RUN containers from that image.
So if you have one image and you run two containers from it, they will both have the same instructions (definition).
What happened in your case
When you invoke the run command, the first thing you will see is:
Unable to find image 'carinamarina/hello-world-app:latest' locally
This means that Docker could not find the image (blueprint) locally under the name docker.io/carinamarina/hello-world-app, so it will do the following:
it will start pulling the image from the remote registry,
then it will start extracting the layers of the image,
then it will start the container and show the logs from INSIDE the container.
Why it didn't work for you
The application is running inside the container on port 5000.
The container has a whole different network than the host it's running on (the CentOS 7 machine in your case).
You will have to set up port forwarding (publishing) between the container's network and the host's network so you can USE the application from the HOST.
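For example, a minimal way to do that (assuming the app listens on port 5000, as the log line shows) is to publish the container port with -p:
# map host port 5000 to container port 5000
sudo docker run -p 5000:5000 docker.io/carinamarina/hello-world-app
After that, http://localhost:5000 on the host should reach the app. The 0.0.0.0 in the log just means the app listens on all interfaces INSIDE the container; it is not an address you browse to.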
You can read more about that in the Docker networking documentation.
I recommend the following places to start:
Play with Docker
Docker tutorial for beginners
Related
I am trying to run the Puppet pupperware suite (all 3 servers: puppet server, puppet DB, DB server).
I am using the official YAML file provided by puppetlabs for docker-compose: https://github.com/puppetlabs/pupperware/blob/master/docker-compose.yml
When I run that YAML file with docker-compose, however, I run into the following error (from docker-compose logs):
postgres_1 | ls: cannot open directory '/docker-entrypoint-initdb.d/': Permission denied
And as a result, the build fails (only the puppet server comes up, but not the other ones).
My docker host is a Fedora 33 virtual machine running inside a Proxmox environment. Proxmox runs on the physical host.
I have disabled SELinux, and I am running docker (moby) rootless. My local user (uid 1000) can run docker without sudo.
I believe I need to set permissions in the container (probably via a Dockerfile), but I am not sure how to change that, and I am not sure how to use a Dockerfile and docker-compose simultaneously.
Thank you for your help.
The docker-compose file is from the Puppet 6 era. The Docker images that the pupperware setup currently pulls are latest, which is Puppet 7.
I got my pre-existing setup functioning again by changing the image names to:
puppet/puppetserver:6.14.1
postgres:9.6
puppet/puppetdb:6.13.1
Maybe this works for you as well.
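For reference, a rough sketch of how that looks in the compose file (only the image: lines change; the service names here are illustrative, keep whatever names the pupperware docker-compose.yml already uses):
services:
  puppet:
    image: puppet/puppetserver:6.14.1
  postgres:
    image: postgres:9.6
  puppetdb:
    image: puppet/puppetdb:6.13.1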
Well, since it's been a month and you have no answers, I will try to help you with what I know.
You should put a Dockerfile in the root of your project. It contains the instructions the Docker daemon uses to build the image, including the Linux commands that are run inside the container during the build. docker-compose then builds or pulls the images referenced in your docker-compose.yml and starts the containers defined there.
So to solve the permission problem, you could add a RUN instruction, which executes a Linux command in the shell at build time, to fix the permissions on that folder.
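A minimal sketch of what that could look like, assuming the failing service uses the stock postgres image (the base image, path, and mode here are assumptions, adjust them to your setup):
# Dockerfile next to docker-compose.yml
FROM postgres:9.6
# hypothetical fix: make the init directory readable by the entrypoint
RUN chmod 0755 /docker-entrypoint-initdb.d
Then point the service at it in docker-compose.yml with build: . instead of image:. Note that if the directory is bind-mounted from the host at runtime, the mount hides whatever the image contains, so you would have to fix the permissions on the host path instead.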
Also look at this answer
How to configure a proxy for Docker containers?
First of all,
I tried the approach that sets '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it really works for the Docker daemon, but it doesn't work for Docker containers; it seems this only takes effect for commands like 'docker pull'.
Second,
I have a lot of Docker containers, and I don't want to pass 'docker run -e http_proxy=xxx...' every time I start a container.
So I wondered if there is a way to automatically load a global configuration file when a container starts. I googled it and found the suggestion to set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine is CentOS 7; here is my docker -v:
Docker version 1.13.1, build 6e3bb8e/1.13.1)
I feel that it may be related to my Docker version, or to Docker being started by the systemd service, so ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying a configuration file will allow all my containers to automatically get the proxy environment variables when they start (that is, automatically set 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV instruction). I want to know if there is such a way. If there is, I can make the containers use the host's proxy; after all, the proxy on my host works properly.
I also tried running a container and setting http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118 manually, but when I ran 'wget facebook.com' I got 'Connecting to HostIP:8118... failed: No route to host.' The host machine (CentOS 7) can run that wget successfully, and I can successfully ping the host from inside the container. I don't know why; it might be related to the firewall and port 8118.
That's it. I have run out of ideas; can anyone help me?
==============================
PS:
As you can see from the screenshots below, I actually want to install goa and goagen, but I get an error, probably for network reasons, so I want to try going through the proxy; that is the only reason I have the problem above.
1. My go Docker container (screenshot: wget failing inside the container)
2. My host (screenshot: wget succeeding on the host)
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
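With a recent enough Docker client, ~/.docker/config.json looks roughly like this (the proxy addresses are placeholders for your own proxy):
{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
Docker then injects the matching http_proxy/https_proxy environment variables into every container you create afterwards.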
In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One of the services is MySQL (MariaDB), and when I launch the container I don't see that service starting up. When I try to start it manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos/systemd image and that doesn't help either. How do I launch the container with the services started using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. systemd) doesn't get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try changing the command you use to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
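Concretely, something along these lines (a sketch only; whether systemd fully comes up also depends on the privileged/cgroup points discussed in the other answers):
docker run -d --privileged=true --name mariadb pbellamk/mariadb /sbin/init
docker exec -it mariadb bash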
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container and in no way stops you from doing that, you will eventually run into areas where multiple services in a container don't quite map to what Docker or external tools expect. Things like scaling services, or using Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
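For reference, the usual invocation for such a systemd-based image looks roughly like this (centos/systemd is one example of an image built for this; adjust to whatever image you use):
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos/systemd /usr/sbin/init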
The s6-overlay project provides a more Docker-friendly process management system using s6.
It's fairly rare you actually need ssh access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
You can avoid running a systemd daemon inside a Docker container altogether. You can even avoid writing a special start.sh script; that is another benefit of using the docker-systemctl-replacement script.
The replacement systemctl.py can parse the normal *.service files to know how to start and stop services. You can register it as the CMD of an image, in which case it will look for all the systemctl-enabled services; those will be started and stopped in the correct order.
The current testsuite includes testcases for the LAMP stack including centos, so it should run fine specifically in your setup.
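A rough sketch of how that could look for the MariaDB case from the question (this assumes you have downloaded the script locally as systemctl.py; check the project's README for the exact file name and options it currently ships with):
FROM centos:7
RUN yum -y install mariadb mariadb-server
# replace the real systemctl with the docker-systemctl-replacement script
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable mariadb
CMD ["/usr/bin/systemctl"]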
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock Ubuntu image but with systemd and multi-user mode.
My use case is the first one mentioned in its README. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer. The test should build the installer, install the application on an Ubuntu system, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with Vagrant. So, there are valid use cases for systemd in Docker.
I have a PHP project in a Docker container. I ran it and it works.
Question: How do I open this project in PhpStorm so that I can edit the files?
I read that I can connect to a container via SSH. So where can I get the login, password, and IP address of the Docker container?
Well, it's not impossible, but I think the workflow is upside down. You build a container to run it, not to distribute projects for others to edit. Yes, you can run an SSH service inside your container so you can connect to it, but if that's not the container's main purpose then I'd advise against it. A container's resources are meant to be encapsulated, and for a reason.
Having a container for the sole purpose of distributing a "non-running" PhpStorm project sounds weird. It's more likely that you actually want to mount your own project into a container that is running a web server or PHP, etc. If you tell us more, you can probably get a better answer.
Anyway, you can copy files from the container using docker cp
docker cp CONTAINER:SRC_PATH DEST_PATH
And you can run commands with docker exec and even start up a shell to make changes (this is pretty much the "docker ssh" you are asking for - but it's not intended for your IDE to connect and make changes with):
docker exec -it <mycontainer> bash
To answer part of your question:
So where can i get login, password and ip-address to docker-container?
> docker ps # Shows running containers
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0bd4e5f2d116 samples "nginx -g 'daemon ..." 8 hours ago Up 8 hours 192.168.1.10:8083->80/tcp samples
e021f9bc74c4 nginx:alpine "nginx -g 'daemon ..." 10 days ago Up 8 hours 80/tcp, 0.0.0.0:8080->8080/tcp web
Then when you know the container id:
> docker inspect e02
* Shows lots of info about the container, including network and IP. (However login and pass not so much) *
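If you only want the container's IP address, you can also filter the inspect output with a format template, for example:
> docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' e021f9bc74c4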
Perhaps you are thinking of docker-machine, which lets you configure connections to remote (and local) machines that run Docker. docker-machine has a docker-machine ssh command that lets you connect directly to that machine. Having connected, you still need to use docker exec ... to access your running containers, so it probably won't help you much. (You use docker-machine to push containers to remote machines and also start/stop containers remotely.)
This is definitely possible and I see nothing wrong with using Docker for local development. In this case, I think the best way to achieve what you want is to mount the Docker file directory as a volume on your computer. You can do this within your docker-compose.yml (assuming you are using one) by adding something like this...
volumes:
- ./foo:/var/www/html
Change foo to whatever you like; /var/www/html would be the path inside the Docker container to the files you want to edit. You will then see the "foo" directory on your computer next to your docker-compose.yml file, and when you open it you have direct access to the files, so you can point PhpStorm at it as you normally would.
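For context, a minimal sketch of a whole service with such a mount could look like this (the image name and ports are placeholders for whatever your project actually runs):
services:
  web:
    image: php:apache
    ports:
      - "8080:80"
    volumes:
      - ./foo:/var/www/html
With that in place you edit ./foo locally in PhpStorm and the running container picks up the changes immediately.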
Of note: I'm using macOS, so if you are using Docker on Windows I believe you may need to make additional accommodations.
Forward SSH port 22 to a new port (perhaps 2233) and then use PhpStorm's built-in SSH (Tools -> SSH):
https://blog.jetbrains.com/phpstorm/2013/08/using-the-phpstorm-built-in-ssh-terminal-and-remote-ssh-external-tools/
I created a data-only container containing static HTML files that are intended to be consumed by an nginx container. The goal is that my webapp provides a volume that nginx can use.
For this reason I created a simple Dockerfile:
FROM scratch
MAINTAINER me <me#me.com>
ADD dist/ /webappp/
When I run the created container from the command line with docker run -d -v /webappp --name webapp myOrg/webapp echo yo
I get the error Error response from daemon: Cannot start container db7fd5cd40d76311f8776b1710b4fe6d66284fe75253a806e281cd8ae5169637: exec: "echo": executable file not found in $PATH, which is of course correct, because the image contains no executables at all. Running a container without a command is not possible.
While this error on the command line is not a big problem for me, because I know the data container is still created and can be accessed by nginx, it turns out to be a no-go if I want to automate it with Vagrant. The automated process always fails because of this error.
My only solution so far is to base my handy little image on a distro image, which IMHO doesn't make sense for a data-only container, just in order to be able to call echo or true!
Is there a NOP exec command in Docker, or does Docker always need to execute something? Is it possible to run a scratch container that does nothing, or at least does not produce an error?
As mentioned in the Docker manual: the container doesn't need to be running. It also doesn't say that the container "should" be able to run at all.
So instead of echoing something pointless by running the data-only container, e.g. docker run -v /webappp --name webapp myOrg/webapp echo yo,
it is already enough to just create the container and never run/start it:
docker create -v /webappp --name webapp myOrg/webapp
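And to close the loop on the original goal, the nginx container can then consume that volume with --volumes-from, roughly like this (port mapping and nginx configuration are up to you):
docker run -d --volumes-from webapp -p 80:80 nginx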
Note to self: Vagrant does not support docker create when provisioning!
Why are you using scratch?
Just use the nginx image as a base. You already have the image cached so it won't take up any more space and you'll be able to call echo.
Some references for data containers:
Data-only container madness
Understanding Volumes in Docker
Official docs on data containers