How to open a project in Docker in PhpStorm?

I have a PHP project in a Docker container. I ran it and it works.
Question: How do I open this project in PhpStorm so that I can edit the files?
I read that I can connect to the container via SSH. So where can I get the login, password and IP address for the Docker container?

Well, it's not impossible, but I think the workflow is upside down. You build a container to run it, not to distribute projects for others to edit. Yes, you can run an SSH service inside your container so you can connect to it, but if that's not the container's main purpose then I'd advise against it. Consider a container's resources to be encapsulated, and for a reason.
Having a container for the sole purpose of distributing a "non-running" PhpStorm project sounds weird. It's more likely that you actually want to mount your own project into a container that is running a web server or PHP etc. If you tell us more you can probably get a better answer.
Anyway, you can copy files from the container using docker cp
docker cp CONTAINER:SRC_PATH DEST_PATH
And you can run commands with docker exec and even start up a shell to make changes (this is pretty much the "docker ssh" you are asking for - but it's not intended for your IDE to connect and make changes with):
docker exec -it <mycontainer> bash
To answer part of your question:
So where can I get the login, password and IP address for the Docker container?
> docker ps # Shows running containers
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0bd4e5f2d116 samples "nginx -g 'daemon ..." 8 hours ago Up 8 hours 192.168.1.10:8083->80/tcp samples
e021f9bc74c4 nginx:alpine "nginx -g 'daemon ..." 10 days ago Up 8 hours 80/tcp, 0.0.0.0:8080->8080/tcp web
Then when you know the container id:
> docker inspect e02
* Shows lots of info about the container, including network and IP. (However login and pass not so much) *
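If all you need is the container's IP address, you can filter the inspect output with a Go template (a small sketch using the same container id from the listing above):
> docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' e02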
Perhaps you are thinking of docker-machine, which lets you configure connections to remote (and local) machines that run Docker. docker-machine has a docker-machine ssh command that lets you connect directly to that machine. Having connected, you still need to use docker exec ... to access your running containers, so it probably won't help you much. (You use docker-machine to push containers to remote machines and also to start/stop containers remotely.)

This is definitely possible and I see nothing wrong with using Docker for local development. In this case, I think the best way to achieve what you want is to mount the Docker file directory as a volume on your computer. You can do this within your docker-compose.yml (assuming you are using one) by adding something like this...
volumes:
- ./foo:/var/www/html
Change foo to whatever you like; /var/www/html is the path inside the Docker container to the files you want to edit. You will then see the "foo" directory on your computer next to your docker-compose.yml file, and when you open it you have direct access to the files, so you can point PhpStorm at them as you normally would.
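For context, a fuller docker-compose.yml might look something like this minimal sketch (the php:7.4-apache image and the 8080 port mapping are illustrative assumptions, not from the question):
version: "3"
services:
  web:
    image: php:7.4-apache    # illustrative image; use whatever your project actually runs on
    ports:
      - "8080:80"
    volumes:
      - ./foo:/var/www/html  # host directory "foo" mapped into the container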
Of note... I'm using macOS, so if you are using Docker on Windows I believe you need to make additional accommodations.

Forward SSH port 22 to a new port (perhaps 2233) and then use PhpStorm's built-in SSH terminal (Tools -> SSH):
https://blog.jetbrains.com/phpstorm/2013/08/using-the-phpstorm-built-in-ssh-terminal-and-remote-ssh-external-tools/
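A rough sketch of that forwarding, assuming your image already runs an SSH daemon (the image and container names here are hypothetical):
docker run -d -p 2233:22 --name php-app my-php-image
ssh -p 2233 user@localhost    # user/password depend on how the image configures sshd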

Related

Access files on host server from the Meteor App deployed with Meteor Up

I have a Meteor App deployed with Meteor UP to Ubuntu.
From this App I need to read a file which is located outside App container on the host server.
How can I do that?
I've tried to set up volumes in the mup.js but no luck. It seems that I'm missing how to correctly provide /host/path and /container/path
volumes: {
// passed as '-v /host/path:/container/path' to the docker run command
'/host/path': '/container/path',
'/second/host/path': '/second/container/path'
},
I've read the docs for mounting Docker volumes but obviously can't understand them.
Let's say the file is at /home/dirname/filename.csv.
How do I correctly mount it into the app so that I can access it from the application?
Or maybe there are other possibilities to access it?
Welcome to Stack Overflow. Let me suggest another way of thinking about this...
In a scalable cluster, docker instances can be spun up and down as the load on the app changes. These may or may not be on the same host computer, so building a dependency on the file system of the host isn't a great idea.
You might be better to think of using a file storage mechanism such as S3, which will scale on its own, and disk storage limits won't apply.
Another option is to determine if the files could be stored in the database.
I hope that helps
Let's try to narrow the problem down.
Meteor UP is passing the configuration parameter volumes directly on to docker, as they also mention in the comment you included. It therefore might be easier to test it against docker directly - narrowing the components involved down as much as possible:
sudo docker run \
-it \
--rm \
-v "/host/path:/container/path" \
-v "/second/host/path:/second/container/path" \
busybox \
/bin/sh
Let me explain this:
sudo because Meteor UP uses sudo to start the container. See: https://github.com/zodern/meteor-up/blob/3c7120a75c12ea12fdd5688e33574c12e158fd07/src/plugins/meteor/assets/templates/start.sh#L63
docker run we want to start a container.
-it to access the container (think of it like SSH'ing into the container).
--rm to automatically clean up - remove the container - after we're done.
-v - here we give the volumes as you define it (I here took the two directories example you provided).
busybox - an image with some useful tools.
/bin/sh - the application to start the container with
I'd expect that you also cannot access the files here. In this case, dig deeper on why you can't make a folder accessible in Docker.
If you can, which would surprise me, start the container and try to get into it by running the following command:
docker exec -it my-mup-container /bin/sh
You can think of this command like SSH'ing into a running container. Now you can check around if it really isn't there, if the credentials inside the container are correct, etc.
Lastly, I have to agree with @mikkel that mounting a local directory isn't a good option, but you can now start looking into how to use a docker volume to mount remote storage. He mentioned S3 on AWS; I've worked with Azure Files on Azure; there are plenty of possibilities.
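If the plain docker run test above does work, the equivalent mup.js volumes block for the /home/dirname/filename.csv case from the question might look like this sketch (mounting the parent directory; /data inside the container is an arbitrary choice):
volumes: {
  // passed as '-v /home/dirname:/data' to the docker run command
  '/home/dirname': '/data'
},
The app can then read the file at /data/filename.csv inside the container.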

How to configure a proxy for Docker containers?

How do I configure a proxy for Docker containers?
First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it really works for the Docker daemon, but it doesn't work for Docker containers; it seems this approach only takes effect for commands like 'docker pull'.
Secondly,
I have a lot of Docker containers, and I don't want to use 'docker run -e http_proxy=xxx...' every time I start a container.
So I wondered if there is a way to automatically load a global configuration file when a container starts. I googled it and found that I should set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine is CentOS 7; here is my docker -v:
Docker version 1.13.1, build 6e3bb8e/1.13.1)
I feel that it may be related to my Docker version, or to Docker being started by the systemd service, so ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying a configuration file will let all my containers automatically set the environment variables 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when they start (like the Dockerfile ENV instruction does). I want to know if there is such a way. If there is, I can make the containers use the host's proxy; after all, the host's proxy is working properly.
But I was wrong: I tried running a container, then set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118, but when I ran 'wget facebook.com' I got 'Connecting to HostIP:8118... failed: No route to host.' However, the host machine (CentOS 7) can run the wget successfully, and I can successfully ping the host from inside the container. I don't know why; it might be related to the firewall and port 8118.
That's it,
OMG.. I have no other ideas, can anyone help me?
==============================
PS:
As you can see from the screenshots below, I actually want to install goa and goagen but get an error, maybe for network reasons, so I want to try going through a proxy; that's the only reason I have the problem above.
[Screenshots: 1. wget failing inside my go Docker container; 2. the same wget succeeding on my host.]
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented from docker:
https://docs.docker.com/network/proxy/
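For reference, on those newer versions the documented ~/.docker/config.json format looks like this (the HostIP:8118 values are taken from the question; the noProxy line is optional):
{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}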

Share windows directory to Linux docker container

I've been trying the whole day to accomplish a simple example of sharing a Windows directory with a Linux container running on a Windows Docker host.
I have read all the guidelines and run the following:
docker run -it --rm -p 5002:80 --name mount-test --mount type=bind,source=D:\DockerArea\PortScanner,target=/app/PortScannerWorkingDirectory barebonewebapi:latest
The original PortScanner directory on the host machine has a text file in it. The container is created successfully.
The issue is that when I'm trying to
docker exec -it mount-test /bin/bash
and then list the mounted directory PortScannerWorkingDirectory in the container - it just shows up as empty. Nor can the C# code read the contents of the host file in the mapped directory.
Am I missing something simple here? I feel like I'm stuck and can't share files from the host Windows machine with the Linux container.
After several days of dealing with the issue I've found a fairly simple answer. Although I already had the C and D drives shared with Docker in the Docker settings, I did an experiment and re-shared both drives (there's a special Reset Credentials button for that purpose in the Docker for Windows settings). After that the issue was resolved. So I'm saving it here in the hope that it may help someone else, since this seems to be a glitch with permissions or something similar.
The issue is quite hard to diagnose - when there's an issue the Docker container just silently writes into its writable layer and no error pops up.
Go to the Docker settings -> Shared Drives -> Reset credentials,
then tick the drive and click the Apply button.
Then execute the following command, as suggested by Docker:
docker run --rm -v c:/Users:/data alpine ls /data

Run commands on host from container command prompt

I use portainer to manage containers and it works great.
https://portainer.io/
But when I connect to the console, I get the command prompt of the container. Is there any way to run simple commands like ls /home/ that will list the files on the host?
In other words, is there any image that will mount the file system of the host server "as-is"?
Here's an example using docker command line:
$ docker run --rm -it -v ~/Desktop:/Desktop alpine:latest /bin/sh
/ # ls /Desktop/
You can extend the approach to as far as you need to. Experiment with it. Learn about the different mount options.
I know the Docker app on macOS provides a way to configure default volume mounts. Portainer also claims to provide a volume management screen; I have yet to use it.
Hope this helps.
If you're dealing with services, or an existing, running container, you can in most cases access the shell directly. Let's say you have a container called "meow". You can run:
docker exec -it meow bash
and it will drop you into the bash shell. You'll actually need to know if bash is installed, or try calling sh instead.
The "i" option indicates it should be interactive, and the "t" option indicates it should emulate a TTY terminal. When you're done, you can hit Ctrl+D to exit out of the container.
First of all: You never ever want to do so.
Volumes mounted into containers are used to persist the container's data, as containers are designed to be volatile (the container itself shouldn't persist its state, so restarting the container any number of times should result in the same container state each time it starts). So think of the volume as the database where all the data (the state of the container) should be stored.
Seeing volumes this way makes it easier to decide against sharing the host's entire file system, as such a container would have read/write permissions over the host OS files themselves, which is a huge security threat.
Sharing volumes across containers is considered bad container architecture, let alone sharing the entirety of the host file system.
I would propose plain SSH (or remote desktop) to your host if you require access to it to run commands or tasks.
OR, if your container requires access to a specific folder for some reason, then you should consider mounting or binding that folder to the container:
docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
I would recommend copying the content of that folder into a Docker-managed volume (a folder under the docker/volumes tree) and binding the container to this volume instead of the original folder, to minimize the impact of your container on your host's OS.
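A rough sketch of that copy step (the host folder /srv/appdata and the volume name appvol are hypothetical):
docker volume create appvol
# one-off copy of the host folder's contents into the managed volume
docker run --rm -v /srv/appdata:/from -v appvol:/to alpine cp -a /from/. /to/
# then bind the container to the managed volume instead of the host folder
docker run -d --name devtest --mount source=appvol,target=/app nginx:latest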

Start service using systemctl inside docker container

In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One of the services is mysql, and when I launch the container I don't see the mysql service starting up. When I try to start it manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. systemd) doesn't get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try changing the command you use to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
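Concretely, a sketch of that using the image from the question (the container name is arbitrary; the cgroup mount matches the systemd requirements mentioned further down in this thread):
# boot the init system instead of bash; --privileged and the cgroup mount are typically required
docker run -d --privileged --name mariadb-test \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    pbellamk/mariadb /sbin/init
# then poke around inside the running container
docker exec -it mariadb-test sh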
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container and in no way stops you from doing that, you will eventually run into areas where multiple services in a container don't quite map to what Docker or external tools expect. Things like scaling services, or using Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
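As a rough illustration of that split, a minimal docker-compose.yml with the prebuilt httpd and mariadb images might look like this (image tags, ports and the password are placeholder assumptions):
version: "3"
services:
  web:
    image: httpd:2.4                 # prebuilt Apache httpd container
    ports:
      - "8080:80"
  db:
    image: mariadb:10.5              # prebuilt MariaDB container, no custom Dockerfile needed
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
    volumes:
      - db-data:/var/lib/mysql       # keep database files in a named volume
volumes:
  db-data: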
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
The s6-overlay project provides a more Docker-friendly process management system using s6.
It's fairly rare you actually need ssh access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
You can avoid running a systemd daemon inside a docker container altogether. You can even avoid writing a special start.sh script - that is another benefit of using the docker-systemctl-replacement script.
The docker systemctl.py can parse the normal *.service files to know how to start and stop services. You can register it as the CMD of an image, in which case it will look for all the systemctl-enabled services - those will be started and stopped in the correct order.
The current testsuite includes testcases for the LAMP stack including centos, so it should run fine specifically in your setup.
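A hedged Dockerfile sketch of that approach, based on the question's image (the COPY source path for systemctl.py is an assumption; check the project's README for the exact location and recommended usage):
FROM centos:7
RUN yum -y install mariadb mariadb-server
# replace the stock systemctl with the docker-systemctl-replacement script
COPY systemctl.py /usr/bin/systemctl
RUN systemctl enable mariadb
# the script runs as PID 1 and starts every systemctl-enabled service
CMD ["/usr/bin/systemctl"]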
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock Ubuntu image but with systemd and multi-user mode.
My use case is the first one mentioned in its README. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer. The test should build the installer, install the application on an Ubuntu system, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with Vagrant. So there are valid use cases for systemd in Docker.
