I'm looking for a way to set some commands in my Dockerfile to run once I have executed "docker run".
My use case: I have two containers, Web (Apache, PHP) and DB (MySQL).
When I execute "docker run" on the Web container and the link to the DB container is made, I want to execute the migrations script in my Web container.
I can use "docker exec" to get into the box and run the migrations, which works. I just want to automate this with the Dockerfile if possible, or with another provisioner.
Thanks
Simon
Just have a script in either image (it seems to make more sense for it to live in your DB image) and execute it before you start your web server. Even better, store your MySQL data in a volume, so that on the next run or restart of the DB container you don't have to worry about the migration:
# migrate data into your volume
docker run --name mysql-data -v /my/mysql/data mysql-image migrate.sh
# run mysql
docker run --name mysql -d --volumes-from mysql-data mysql-image
# run www
docker run --name www -d --link mysql:mysql php-image
You can also just set your entry point to a custom script, let's call it /my/run.sh:
#!/usr/bin/env bash
mysqlimport ...
# run apache in the foreground (non-daemon mode); apache2ctl is the Debian-style command
exec apache2ctl -D FOREGROUND
Then:
docker run --name www -d --link mysql:mysql --entrypoint /my/run.sh php-image
Docker is designed around running a single process per container, but it can run several; have a look at supervisord: https://docs.docker.com/articles/using_supervisord/
If you really want to make the DB migration part of your docker container run, you can use a more complex script as the command, which would first run the migrations script and then start the web service, as sketched below.
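A minimal sketch of such a combined command, assuming the linked container alias mysql from the commands above, a hypothetical migrations script /my/migrate.sh, and that the MySQL client tools are available in the web image:
#!/usr/bin/env bash
# wait until MySQL accepts connections; "mysql" is the --link alias
until mysqladmin ping -h mysql --silent; do
  sleep 1
done
/my/migrate.sh                   # run the migrations first
exec apache2ctl -D FOREGROUND    # then run the web server in the foreground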
I'm trying to develop a Plone project with Docker. I have used the official image of Plone 5.2.0, and the image builds and runs perfectly with:
$ docker build -t plone-5.2.0-official-img .
$ docker run -p 8080:8080 -it plone-5.2.0-official-img
But Plone restarts each time I run the container, asking me to create the project from scratch.
Could anybody help me with this?
Thanks in advance.
You can also use a volume for the data, like:
$ docker run -p 8080:8080 -it -v plone-data:/data plone-5.2.0-official-img
The next time you run a new container, it will reuse the previous data.
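To verify that the data actually lands in the named volume (plone-data from the command above), you can inspect it between runs:
docker volume ls                  # the plone-data volume should be listed
docker volume inspect plone-data  # shows where the volume's data lives on the host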
In case this helps:
Volumes are the Docker way to persist data; you can read up on them in the official volumes documentation.
When running the container, just add a -v option and specify the path where your data should be stored:
$ docker run -p <host_port>:<container_port> -it -v <host_path>:<container_path> <image>
This is expected behavior, because docker run starts a new container, which doesn't have the state from your previous container.
You can use docker start CONTAINER, which restarts the existing container with the state from that container's setup:
https://docs.docker.com/engine/reference/commandline/start/
A more common approach is to use docker-compose.yml and docker-compose up -d, which will, in most cases, reuse previous state.
https://docs.docker.com/compose/gettingstarted/
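As a sketch of that workflow (assuming the compose file declares a named volume for the data):
docker-compose up -d     # first run: creates containers and named volumes
docker-compose stop      # stops containers without removing them
docker-compose up -d     # reuses the same containers and volumes
docker-compose down      # removes containers; named volumes are kept by default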
I don't want to install postgres locally, but since I have it in my Docker container, I'd like to be able to run its commands and utilities, like pg_dump myschema > schema.sql.
How can I run such commands inside a running container?
docker exec -it <container> <cmd>
e.g.
docker exec -it your-container /bin/bash
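Applied to the pg_dump use case from the question (the container name, user, and database names here are assumptions), note that the redirection happens on the host, so the dump file lands outside the container:
# no -t flag, so stdout stays a clean stream for redirection on the host
docker exec your-container pg_dump -U postgres -n myschema mydb > schema.sql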
There are different options:
You can copy files into a container using the docker cp command. Copy the required files in, then go inside the container and run the command.
Modify the Dockerfile used for image creation; writing a Dockerfile is actually quite simple. Using the EXPOSE instruction you can expose a port, and then you can use docker run --publish (i.e. the -p option) to publish the container's port(s) to the host. After that you can access Postgres from outside and run scripts externally by creating a connection.
For the first option you need to go inside the container: first list the running containers using docker ps, then use docker exec -it container_name /bin/bash, as sketched below.
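A minimal sketch of the first option (the container name and file names are assumptions):
docker cp restore.sql your-container:/tmp/restore.sql   # copy a file into the container
docker exec -it your-container /bin/bash                # then get a shell inside it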
When I run docker-compose up, it creates a web container and a postgres container.
I want to manually trigger my Django tests to run, via something like
docker-compose run web python manage.py test
The problem with this is that it creates a new container (requiring new migrations to be applied, housekeeping work, etc.).
The option I'm leaning towards is doing something like
docker exec -i -t <containerid> python manage.py test
This introduces a new issue: I must run docker ps first to grab the container ID. The whole point of this is to run the tests automatically for each build, so manually running docker ps is not a solution.
So is there a way to dynamically grab the container ID, or is there a better way to do this? This would not be an issue if you could assign container names in docker-compose.
While an accepted answer was provided, the answer itself is not really related to the title of this question:
Dynamically get a running container name created by docker-compose
To dynamically retrieve the name of a container run by docker-compose, you can execute the following (cut -c2- strips the leading "/" from the name):
docker inspect -f '{{.Name}}' $(docker-compose ps -q web) | cut -c2-
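In a build script, the name can be captured in a variable and fed to docker exec (the service name web comes from the question):
CONTAINER=$(docker inspect -f '{{.Name}}' $(docker-compose ps -q web) | cut -c2-)
docker exec -it "$CONTAINER" python manage.py test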
Just use docker-compose exec. It will execute in the already-running container instead of starting a new one.
docker-compose exec web python manage.py test
You can assign a name to a container using the container_name option in your docker-compose.yml file:
container_name: my_web_container
Then you can easily run commands in that container:
docker exec my_web_container python manage.py test
For more information about docker-compose options, visit the official documentation.
https://docs.docker.com/compose/compose-file/
Use docker-compose ps -q to find the ID of the container and run the command in it:
docker exec -it $(docker-compose ps -q) sh
PS: This will NOT work if you have multiple containers in the docker-compose file.
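With multiple services you can narrow it down by passing the service name (web is an assumption here):
docker exec -it $(docker-compose ps -q web) sh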
I am using NodeBB. To start a server you can run ./nodebb start; to stop it, ./nodebb stop. Now that I have dockerized it (http://nodebb-francais.readthedocs.org/projects/nodebb/en/latest/installing/docker/nodebb-redis.html), I am not sure how I can interact with it.
I have followed the steps in "Using docker-machine (Mac OS X)":
docker run --name my-forum-redis -d -p 6379:6379 nodebb/docker:ubuntu-redis
Then
docker run --name my-forum-nodebb --link my-forum-redis:redis -p 80:80 -p 443:443 -p 4567:4567 -P -t -i nodebb/docker:ubuntu
Then
docker start my-forum-nodebb
I had an issue with the redis address being in use, so I want to fix that and restart, but I am not sure how. Also, I would like to issue the grunt command in the project directory; again, not sure how.
My question is how can I interact with an app inside a docker container as if I had direct access to the project folder itself? Am I missing something?
All code in this answer is untested, as I'm currently at a computer without docker.
See whether the containers are still running
docker ps
Stop misconfigured containers
docker stop my-forum-redis
docker stop my-forum-nodebb
Remove misconfigured containers and their volumes
(The docker images they are based on will be retained.)
docker rm --volumes --force my-forum-nodebb
docker rm --volumes --force my-forum-redis
Start again
Then, issue your 3 commands again, now with the correct ports.
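For example, if host port 6379 was already taken, you could map a different host port for Redis (6380 here is just an assumption):
docker run --name my-forum-redis -d -p 6380:6379 nodebb/docker:ubuntu-redis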
Execute arbitrary commands inside the container
Also, I would like to issue the grunt command in the project directory; again, not sure how.
You probably want to do the following after the docker run --name my-forum-nodebb ... command but before docker start my-forum-nodebb.
docker run accepts a command to execute instead of the container's default command. Let's first use this to find out where in the container we'd land:
docker run --rm nodebb/docker:ubuntu pwd
If that is the directory where you want to run grunt, just go forward with it:
docker run --rm nodebb/docker:ubuntu grunt
If not, you'll have to stuff several commands into a single one. You can do that by invoking a shell:
docker run --rm nodebb/docker:ubuntu bash -c 'cd /path/to/project/dir; grunt'
where /path/to/project/dir is to be replaced by where you want to run grunt.
My problem is:
docker run -d -p 8080:8080 asd/jenkins # everything's ok
# made changes at jenkins
docker commit container_with_jenkins # committed
docker run -d -p 8080:8080 image_from_container_with_changes
# => Error: create: No command specified
Am I missing something?
How does one work with Docker images and save changes made within a container?
When you commit a container, the resulting image does not inherit the CMD from its parent image. So when you start a container based on the new image, you need to supply a run command:
docker run -d image_from_container_with_changes java -jar /var/lib/jenkins/jenkins.war
where the run command of course depends on your specific installation.
Jenkins stores its configuration in a directory, e.g. /root/.jenkins. What I would recommend is to create a directory on the host and mount it as a volume:
docker run -v {absolute_path_to_jenkins_dir}:/root/.jenkins -d asd/jenkins
If you start a new container in the same way, it will have the same jobs, etc. In case you make changes that do not go into this directory (I don't know offhand where plugins or updates are installed), you might still want to make a new image. In that case, use the --change option when you commit your container to specify the new command:
docker commit --change='CMD ["java", "-jar", "/var/lib/jenkins/jenkins.war"]' abc1234d
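A container started from the committed image then no longer needs an explicit command, which resolves the original "No command specified" error:
docker run -d -p 8080:8080 image_from_container_with_changes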