Hi, how would I connect a MySQL container to another container so that my application in one of those containers can use MySQL? Based on this reference I need to run this:
docker run --name some-app --link some-mysql:mysql -d application-that-uses-mysql
But I have no idea what some-app means, and what is application-that-uses-mysql? Does it mean the container ID that will use it? Below is the list of the running containers.
--name some-app refers to the name you want to assign to the container you are running.
application-that-uses-mysql refers to the image you are using. This image requires MySQL, and you have connected MySQL to it by using the option --link some-mysql:mysql.
However, links are deprecated; you can instead connect containers using networks. An answer here should be able to help you out in setting up the connections.
--name some-app gives the newly started container the name some-app.
--link some-mysql:mysql links the container named some-mysql (this is the one running MySQL) to the new container, so that it can be accessed as mysql within the container.
application-that-uses-mysql is the name of the image containing your app.
If you are unsure about the meaning of some parameters, there is always the documentation.
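Since links are deprecated, a user-defined network is the modern way to do the same thing. A minimal sketch, reusing the image name application-that-uses-mysql from the question (the password is a placeholder):

```shell
# Create a user-defined bridge network; containers attached to it can
# resolve each other by container name via Docker's embedded DNS.
docker network create my-net

# Start MySQL on that network.
docker run --name some-mysql --network my-net \
    -e MYSQL_ROOT_PASSWORD=secret -d mysql

# Start the app on the same network; inside it, the database is
# reachable under the hostname "some-mysql".
docker run --name some-app --network my-net -d application-that-uses-mysql
```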
Related
I run the Joomla image with:
docker run --name some-joomla --link test-mysql:mysql -p 8080:80 -d joomla
How can I change the Joomla files?
I think it is possible when specifying a volume mapping, but I did not use that to run Joomla: is there a way to access the Joomla files now?
If I understand your question correctly, this thread should help you out.
1. Commit your container and create a new image from it.
2. Run a container from your just created image (and add the volume you need). Watch out for the port mappings: you either have to use other ports temporarily to check the functionality of your new container, or you do step 3 beforehand.
3. If all works out, stop the old one.
If you want to check what's currently in the container, you can jump into it by running docker exec -it some-joomla bash (or sh, whatever shell is installed in this image). You can then look for the files you want inside the container.
If you found them and you want to copy them on your local machine, you can run docker cp some-joomla:/your/path /path/on/local/machine.
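Put together, the steps above could look like this; the image tag and host path are illustrative, and /var/www/html is assumed to be where the official Joomla image keeps its files:

```shell
# 1. Snapshot the running container as a new image.
docker commit some-joomla my-joomla:snapshot

# 2. Run a container from the snapshot, this time with a host volume so
#    the Joomla files are editable from the host. Use a different host
#    port while the old container is still running.
docker run --name some-joomla-v2 --link test-mysql:mysql -p 8081:80 \
    -v /srv/joomla:/var/www/html -d my-joomla:snapshot

# 3. If all works out, stop the old one.
docker stop some-joomla

# Inspecting or copying files works at any time:
docker exec -it some-joomla bash
docker cp some-joomla:/var/www/html/configuration.php /tmp/
```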
I'm running the official Solr 6.6 container in a docker-compose environment without any relevant volumes.
If I modify a running Solr container, the data survives a restart.
I don't see any volumes mounted, and it works for a plain Solr container:
docker run --name solr_test -d -p 8983:8983 -t library/solr:6.6
docker exec -it solr_test /bin/bash -c 'echo woot > /opt/solr/server/solr/testfile'
docker stop solr_test
docker start solr_test
docker exec -it solr_test cat /opt/solr/server/solr/testfile
The above example prints 'woot'. I thought that a container doesn't persist any data? Also, the documentation mentions that the Solr cores are persisted in the container.
All I found regarding container persistence is that I need to add volumes on my own, as mentioned here.
So I'm confused: do containers store the data changed within the container or not? And how does the Solr container achieve this behaviour? The only options I see are that I misunderstood persistence in the case of Docker, or that the build of the container can set some kind of option to achieve this which I don't know about and didn't see in the Solr Dockerfile.
This is expected behaviour.
The data you create inside a container persist as long as you don't delete the container.
But think of containers with a throw-away mentality. Normally you would want to be able to remove the container with docker rm and spawn a new instance, including your modified config files. That's why you would need e.g. a named volume here, which survives the container life cycle on your host.
The Dockerfile, because you mention it in your question, actually only defines the image. When you call docker run you create a container from it. Exactly as defined in the image. A fresh instance without any modifications.
When you call docker commit on your container, you snapshot it (including the changes you made to the files) and create a new image from it. That is how data can be persisted across containers this way.
The documentation you are referring to explains this in detail.
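To make such modifications survive even a docker rm, a named volume can be mounted over the path in question. A sketch reusing the test file from the question:

```shell
# Create a named volume and mount it over Solr's core directory.
docker volume create solr_data
docker run --name solr_test -d -p 8983:8983 \
    -v solr_data:/opt/solr/server/solr library/solr:6.6
docker exec -it solr_test /bin/bash -c 'echo woot > /opt/solr/server/solr/testfile'

# Remove the container entirely...
docker rm -f solr_test

# ...and a fresh container mounting the same volume still sees the file:
docker run --name solr_test2 -d -p 8983:8983 \
    -v solr_data:/opt/solr/server/solr library/solr:6.6
docker exec -it solr_test2 cat /opt/solr/server/solr/testfile
```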
I have a container with PHP and a linked container with MySQL database, because I need an ability to run PHPUnit with database (integration tests).
The basic command looks like this:
docker run -i --rm --link db binarydata/phpunit php script.php
I have created the db container and started it before running this command.
The binarydata/phpunit container gets removed after the command has run, but the db container stays up and running.
Question is: how can I achieve --rm functionality on a linked container, so it will be removed too after command was executed?
how can I achieve --rm functionality on a linked container, so it will be removed too after command was executed?
First, you don't have to use --link anymore with docker 1.10+; docker-compose will create a network in which all containers see each other.
And with a docker-compose alias, you can declare your database container as "db" for the binarydata/phpunit container to use.
Second, with that network in place, if you stop/remove the php container, it will be removed from said network, including its alias 'db'.
That differs from the old link (docker 1.8 and before), which would modify the /etc/hosts of the container that needed it. In that case, removing the linked container would not, indeed, change the /etc/hosts.
With the new embedded docker-daemon DNS, there is no longer a need for that.
Matt suggests in the comments the following command and caveats:
docker-compose up --abort-on-container-exit --force-recreate; otherwise the command never returns and the db container would never be removed.
up messes with stdout a bit though.
The exit status for the tests will be lost too, it's printed to screen instead.
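Matt's suggestion could be wired up roughly like this; the service and image names follow the question, and the MySQL environment variable is an assumption:

```shell
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
  phpunit:
    image: binarydata/phpunit
    command: php script.php
    depends_on:
      - db
EOF

# Run the tests; exit as soon as the phpunit container exits,
# then remove both containers together:
docker-compose up --abort-on-container-exit --force-recreate
docker-compose down
```

docker-compose down removes both containers, which gives you --rm-like behaviour for the linked db container as well.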
There is something I'm missing in many Docker examples, and that is persistent data. Am I right to conclude that every container that is stopped will lose its data?
I got this PrestaShop image running with its internal database:
https://hub.docker.com/r/prestashop/prestashop/
You just run docker run -ti --name some-prestashop -p 8080:80 -d prestashop/prestashop
Well, you've got your demo then, but it's not very practical.
First of all I need to hook up an external MySQL container, but that one will also lose all its data if, for example, my server reboots.
And what about all the modules and themes that are going to be added to the PrestaShop container?
It has to do with volumes, but it is not clear to me how the host volumes need to be mapped correctly and what path on the host is normally chosen. /opt/prestashop or something?
First of all, I don't have any experience with PrestaShop. This is an example which you can use for every docker container (from which you want to persist the data).
With the new version of Docker (1.11) it's pretty easy to persist your data.
First create your named volume:
docker volume create --name prestashop-volume
You will see this volume in /var/lib/docker/volumes:
prestashop-volume
After you've created your named volume, you can connect your container to it:
docker run -ti --name some-prestashop -p 8080:80 -d -v prestashop-volume:/path/to/what/you/want/to/persist prestashop/prestashop
(when you really want to persist everything, I think you can use the path /)
Now you can do what you want on your database.
When your container goes down or you delete your container, the named volume will still be there, and you're able to reconnect a new container to the named volume.
To make it even easier, you can create a cron job which creates a .tar of the content of /var/lib/docker/volumes/prestashop-volume/.
When really everything is gone, you can restore your volume by recreating the named volume and untarring your .tar file into it.
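The backup and restore can also be done with a throwaway container, which avoids reading /var/lib/docker directly (the alpine image and the paths are just examples):

```shell
# Back up the named volume into a tarball in the current directory.
docker run --rm -v prestashop-volume:/data -v "$PWD":/backup alpine \
    tar czf /backup/prestashop-volume.tar.gz -C /data .

# Restore: recreate the volume and unpack the tarball into it.
docker volume create --name prestashop-volume
docker run --rm -v prestashop-volume:/data -v "$PWD":/backup alpine \
    tar xzf /backup/prestashop-volume.tar.gz -C /data
```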
In the following guide when establishing how to make data-only docker containers they use the docker create command:
docker create -v /dbdata --name dbdata training/postgres /bin/true
However, as far as I know, with docker-compose you can only run containers not just create them. Is there any way to currently to use docker create in place of docker run for certain containers?
Moreover are there any negative consequences to running a data-only container instead of simply creating it?
Currently the best way seems to be addressed by these two github issue threads:
https://github.com/docker/compose/issues/942
https://github.com/docker/compose/pull/1754
The addition of an option that doesn't run the containers is still under debate, but the solution for now seems to be to manually stop the container after running it:
docker-compose stop <service-name>
You can also change the entrypoint in the .yml file to /bin/true if you don't want to deal with manually stopping it.
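The /bin/true trick could look like this in a compose file, reusing the names from the guide's example:

```shell
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  dbdata:
    image: training/postgres
    entrypoint: /bin/true
    volumes:
      - /dbdata
  db:
    image: training/postgres
    volumes_from:
      - dbdata
EOF

# dbdata runs /bin/true and exits immediately, but its /dbdata
# volume stays available to the db service via volumes_from.
docker-compose up -d
```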