I have the following problem. I want to use this docker-compose file, since it takes care of the set-up after Matomo starts. I want to use it during development, and I need some data in the MariaDB right after the containers start. I have already found the table I need to populate, and the SQL script for it is already written. Now my problem:
To get data into the MariaDB, I could use docker-entrypoint-initdb.d. Unfortunately, at that point the tables do not exist yet, since Matomo, which creates the table structure, waits until the DB is running. The Matomo container does not seem to have a comparable entrypoint hook I could use.
Thus I have, more or less, a circular dependency: matomo depends_on mariadb, and mariadb depends_on matomo.
My question: is there a better way than building my own image, adapting the start-up.sh to call my own entrypoint script that inserts the SQL? As mentioned, this is only for development, so I want to keep it simple.
Thanks in advance
Matthias
So we tried out some stuff.
First, we used a basic instance of Matomo and MariaDB and hoped that the first-run configuration would only have to be done once. If that were the case, we would take a database dump and load it into MariaDB during start-up, since an entrypoint hook is available for that. Unfortunately, Matomo stores the IP of the MariaDB container, and that IP is not localhost; it depends on the Docker container and changes on every start-up. So this approach was not successful either.
After this we found out that Bitnami had changed their Docker image in exactly the way I had planned, a few days after I downloaded it. They added exactly what I needed: a post-init shell script hook.
Now I use that entrypoint hook and everything is working.
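For anyone landing here later, here is a sketch of what the final setup can look like. The hook directory below is an assumption - check the bitnami/matomo README for the exact path supported by your image version, and the same goes for the exact environment variables your versions need:

version: '2'
services:
  mariadb:
    image: bitnami/mariadb:latest
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - MARIADB_DATABASE=matomo
  matomo:
    image: bitnami/matomo:latest
    depends_on:
      - mariadb
    environment:
      - MATOMO_DATABASE_HOST=mariadb
    volumes:
      # Assumed post-init hook path - verify against the image docs.
      # A script mounted here runs after Matomo has created its tables,
      # so it can safely insert the seed data.
      - ./seed-data.sh:/docker-entrypoint-init.d/seed-data.sh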
Related
Docker Compose Customization: per the reference guide, if we point to MySQL in the docker-compose.yml, will that start the MySQL database process along with the other processes (Kafka, Zookeeper, and the Data Flow server), or do we need to manually start the MySQL database process separately before running docker-compose up?
Changing the docker-compose.yml file to point to the MySQL configuration does indeed start a springdataflow_mysql_1 container process.
Creating and deploying streams persists the definitions to the STREAM_DEFINITIONS and STREAM_DEPLOYMENTS tables respectively, under the DATAFLOW database.
Glad you got it working! You can customize the setup to swap in the DB or message broker of your choice. The promise of docker-compose is to bring up the described components in order, with simple logic (via depends_on) that waits for all the middleware components to start. We describe the customization here.
Otherwise, autoconfiguration will kick in to configure the environment for the desired database, as long as the right driver is on the classpath of SCDF - see the supported databases. And yes, we already ship the open-source MariaDB driver, so it works just fine with MySQL.
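For reference, a sketch of what that customization can look like. The image tag and credentials are placeholders; the property names are the standard Spring Boot datasource settings:

version: '3'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: dataflow
      MYSQL_ROOT_PASSWORD: rootpw
  dataflow-server:
    image: springcloud/spring-cloud-dataflow-server-local  # illustrative tag
    depends_on:
      - mysql
    environment:
      # Standard Spring Boot datasource properties; the MariaDB driver
      # that ships with SCDF also speaks to MySQL.
      - spring.datasource.url=jdbc:mysql://mysql:3306/dataflow
      - spring.datasource.username=root
      - spring.datasource.password=rootpw
      - spring.datasource.driver-class-name=org.mariadb.jdbc.Driver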
I have a docker-compose.yml file with a set of services inside. In some cases during development, I want docker-compose not to start one of the services (a helper one) when I run docker-compose up.
Is this possible, and if so, how?
I suppose this is still not implemented; see the discussion here.
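That said, one workaround already works: docker-compose up accepts an explicit list of service names and starts only those (plus anything they depend on via depends_on). Service names here are placeholders:

# Starts only web and db (and their depends_on dependencies),
# leaving the helper service out.
docker-compose up web db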
I am venturing into using Docker and trying to get a firm grasp of the product.
While I love everything it promises, it is a big change from doing things manually.
Right now I understand how to build a container, attach my code, and commit and push it to my repo.
But what I am really wondering is: how do I update my code once deployed? For example, I have some minor bug fixes but no change to dependencies - but I also run a database in the same container.
Container:
Node & NPM
Nginx
MySQL
PHP
Right now, the only way I understand to do it is to stop the container, pull the new image, and run it again - but I am thinking you will lose the database data.
I have been reading https://docs.docker.com/engine/tutorials/dockervolumes/
and thinking maybe the container could mount a data volume that persists between containers.
What I am trying to do is run a web app/website with the above container layout and just update the code with the latest bug fixes/features.
You're quite correct. Docker images are something you should be rebuilding and discarding with each update - avoid docker commit wherever possible (outside your build scripts, anyway).
Persistent state should be managed via data containers that you then mount into your application container. That way your data is decoupled from the specific version and instance of the application.
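A minimal sketch of that data-container pattern (container names and image tag are placeholders):

# Create a named container that owns the /var/lib/mysql volume;
# it never needs to run, it just holds the data.
docker create -v /var/lib/mysql --name app-data mysql:5.6 /bin/true

# Run MySQL against that volume; the MySQL container can be rebuilt
# and replaced freely, the data survives in app-data.
docker run -d --volumes-from app-data --name app-mysql mysql:5.6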
I created a Docker image with pre-installed packages in it (Apache, MySQL, memcached, Solr, etc.). Now I want to run a command in a container made from this image, and this command relies on all my packages. I want to have all of them started when I start a new container.
I tried to use /sbin/init, but it doesn't work in Docker.
The general opinion is to use a process manager to do this. I won't go into the details here, since I wrote a blog post on that: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
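For illustration, a minimal supervisor setup might look like the following; the program commands and paths vary by distro, so treat them as placeholders:

# supervisord.conf
[supervisord]
nodaemon=true

[program:mysqld]
command=/usr/bin/mysqld_safe

[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND

Then the Dockerfile installs supervisor and makes it the container's single entry process:

# Dockerfile (fragment)
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]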
Note that another rather common opinion is to split your containers: MySQL generally runs in a separate container. But you can try to get that working later on as well, of course :)
I see that this is an old topic; however, for someone who just came across it: docker-compose can be used to connect multiple containers, so most of the processes can be split up into different containers. Furthermore, as mentioned earlier, different process managers can be used to run processes simultaneously, and the one I would like to mention is Chaperone. I find it really easy to use and slightly better than supervisor!
docker-compose and docker-sync: you cannot go wrong applying this concept.
-Glynn
I have a hunch that Docker could greatly improve my web-dev workflow - but I haven't quite managed to wrap my head around how to approach a project when adding Docker to the stack.
The basic software stack would look like this:
Software
Docker image(s) providing a custom LAMP stack
Apache with several modules
MySQL
PHP
Some CMS, e.g. Silverstripe
Git
Workflow
I could imagine the workflow looking somewhat like the following (a command-level sketch follows the development list):
Development
Write a Dockerfile that defines a LAMP-container meeting the requirements stated above
REQ: The container should start Apache/MySQL right after booting
Build the docker image
Copy the files required to run the CMS into e.g. ~/dev/cmsdir
Put ~/dev/cmsdir/ under version control
Run the docker container, and somehow mount ~/dev/cmsdir to /var/www/ in the container
Populate the database
Do work in ~/dev/cmsdir/
Commit & shut down the docker container
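Concretely, that development loop might look roughly like this at the command line (image name, port, and paths are placeholders):

# Build the image from the Dockerfile in the current directory
docker build -t my-lamp .

# Run it with the CMS code mounted into the web root
docker run -d -p 8080:80 -v ~/dev/cmsdir:/var/www --name cms-dev my-lamp

# ... edit code in ~/dev/cmsdir, test against http://localhost:8080 ...

docker stop cms-dev && docker rm cms-dev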
Deployment
Set up the remote host (e.g. with Ansible)
Push container image to remote host
Fetch cmsdir-project via git
Run the docker container, pull in the database and mount cmsdir into /var/www
Now, this all looks quite nice on paper, BUT I am not quite sure whether this would be the right approach at all.
Questions:
While developing locally, how would I get the database to persist between restarts of the container instance? Or would I need to run a SQL dump every time before spinning down the container?
Should I have separate container instances for the DB and the Apache server? Or would it be sufficient to have a single container for the above use case?
If using separate containers for database and server, how could I automate spinning them up and down at the same time?
How would I actually mount ~/dev/cmsdir/ into the container's /var/www/ directory? Should I utilize data volumes for this?
Did I miss any pitfalls? Anything that could be simplified?
If you need database persistence independent of your CMS container, you can use one container for MySQL and one container for your CMS. In that case, you can keep your MySQL container running and redeploy your CMS as often as you want, independently.
For development, another option is to map MySQL's data directories from your host/development machine using data volumes. This way you can manage the data files for MySQL (in Docker) using Git (on the host) and "reload" the initial state any time you want (before starting the MySQL container).
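A minimal sketch of that host mapping, with placeholder paths and image tag:

# The host directory holds MySQL's data files; resetting it (e.g. via git)
# restores the initial state before the next start.
docker run -d --name dev-mysql -v ~/dev/mysql-data:/var/lib/mysql mysql:5.6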
Yes, I think you should have a separate container for the db.
I am just using a basic script:
#!/bin/bash
# Start each service detached in its own container and capture the container IDs.
JOB1=$(docker run -d ... /usr/sbin/mysqld)
JOB2=$(docker run -d ... /usr/sbin/apache2)
echo "MySQL=$JOB1, Apache=$JOB2"
Yes, you can use data volumes (the -v switch). I would use this for development. You can mount read-only, so no changes will be made to the directory if you want (your app should store its data somewhere else anyway).
docker run -v /home/user/dev/cmsdir:/var/www/cmsdir:ro image /usr/sbin/apache2
Anyway, for the final deployment I would build an image using a Dockerfile with ADD /home/user/dev/cmsdir /var/www/cmsdir
I don't know :-)
You want to use docker-compose. Follow the tutorial here; it's very simple and seems to tick all your boxes.
https://docs.docker.com/compose/
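As a sketch, a compose file for this stack might look like the following (image tags, password, and paths are placeholders). It covers questions 1-4: the named volume persists the DB between container restarts, the DB and web server are separate services, docker-compose up starts and stops them together, and the host mount puts the CMS code into /var/www:

version: '2'
services:
  db:
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: devpw
      MYSQL_DATABASE: cms
    volumes:
      - db-data:/var/lib/mysql   # DB files persist across restarts
  web:
    build: .                     # the LAMP Dockerfile described above
    depends_on:
      - db
    ports:
      - "8080:80"
    volumes:
      - ~/dev/cmsdir:/var/www    # live-edit the CMS code from the host
volumes:
  db-data: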
I understand this post is over a year old at this point, but I recently asked myself very similar questions and have several good answers to your questions.
You can set up a MySQL Docker instance and have the data persist in a stateless data container, i.e. the data container does not need to be actively running.
Yes, I would recommend having separate instances for your web server and database. This is the power of Docker.
Check out this repo I have been building. Basically it is as simple as make build & make run, and you can have a web server and database container running locally.
You use the -v argument when running the container for the first time; this mounts a specific folder on the host into the container.
I think your ideas are great and it is currently possible to achieve all that you are asking.
Here is a turnkey solution achieving all of the needs you have listed.
I've put together an easy to use docker compose setup that should match your development workflow requirements.
https://github.com/ehyland/docker-silverstripe-dev
Main Features
Persistent DB
Your choice of HHVM + NGINX or Apache2 + PHP5
Debug and set breakpoints with Xdebug
The README.md should be clear enough to get you started.