I have a Docker setup (PHP 7, MySQL and Apache), and it is working correctly.
However, I have to transfer the project to a server where an Apache instance is already running.
I was wondering how I could use the Apache on the server to connect to my Docker setup.
You can use docker-compose, a Dockerfile, or a combination of the two.
You can read more about them in the Docker Compose docs and the Dockerfile docs.
In short:
Create a docker-compose.yml file (together with a Dockerfile) with the contents you need, as described in the docs above.
Connect the services in your application code: for example, instead of localhost as the database host, use the service name you define in docker-compose.yml.
Also copy or mount the relevant Apache files, such as /etc/apache2/httpd.conf and /etc/apache2/sites-available/*.conf (all files ending with .conf), the MySQL data directory /var/lib/mysql/ (database files), and of course your project files.
Then run docker-compose up -d.
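For example, a minimal docker-compose.yml sketch along these lines; the image tags, ports and paths are illustrative, not taken from your project:

version: '2'
services:
  web:
    build: .                  # a Dockerfile with PHP 7 + Apache that copies in your project files
    ports:
      - "8080:80"             # publish on 8080 so it does not clash with the Apache already on the host
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./mysql-data:/var/lib/mysql   # database files persisted on the host

Your PHP code then uses db (the service name) as the database host instead of localhost, and the Apache already running on the server can proxy requests to port 8080 of the web container.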
I'm trying to run specific commands that would be fired automatically on docker-compose up.
I want to avoid all those steps: https://github.com/FLKone/Dodee/tree/php_mysql_slim
(downloading a zip containing the docker-compose.yml + some required default files)
In that example I need a default config file for Nginx.
So now the solution is to download the zip containing both the yml and the config file. But it would be better if the config file were downloaded when the user runs docker-compose up (or created by it, to limit network access).
(Maybe the best practice here is to create an installation script to download both the yml and the config file?)
Thanks
Use entrypoint in your docker-compose.yml. You can do this per service, so the web container can download/configure nginx conf, the php container can run composer, etc.
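A rough sketch of what that could look like; the script names are placeholders for whatever you actually need to run, not files that exist anywhere:

version: '2'
services:
  web:
    image: nginx
    volumes:
      - ./docker-entrypoint.sh:/docker-entrypoint.sh:ro    # placeholder script: write/download the nginx config, then exec nginx
    entrypoint: ["/docker-entrypoint.sh"]                  # runs on every docker-compose up; must be executable
  php:
    image: php:7-fpm
    volumes:
      - ./setup.sh:/setup.sh:ro                            # placeholder: e.g. fetch default files or run composer
    entrypoint: ["sh", "-c", "/setup.sh && exec php-fpm"]

Whatever the entrypoint script does, it has to finish by starting the service's real process (nginx, php-fpm, ...), otherwise the container exits as soon as the script returns.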
I want to containerize a web application (a WAR file) along with Postgres as the database and Tomcat as the server.
What is the procedure to do that?
I am using the following Dockerfile:
FROM tomcat:8-jre8
MAINTAINER lpradel
RUN echo "export JAVA_OPTS=\"-Dapp.env=staging\"" > /usr/local/tomcat/bin/setenv.sh
COPY ./application.war /usr/local/tomcat/webapps/staging.war
CMD ["catalina.sh", "run"]
Write a Dockerfile for each application.
E.g.: base a Dockerfile on a Tomcat server image, copy over your WAR file, and start Tomcat in the CMD part of the Dockerfile.
For Postgres you should be able to use an existing image.
Updated answer:
The Dockerfile you are using is correct. It should produce a Tomcat image that contains the web application you want. However, you may want to connect that Tomcat container with a Postgres container.
The easiest way to connect multiple containers is a docker-compose file. To get started with docker-compose, refer to docker-compose#build.
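A minimal sketch of such a docker-compose.yml, assuming it sits next to the Dockerfile above; the service names, credentials and Postgres tag are examples only:

version: '2'
services:
  web:
    build: .                   # the Tomcat Dockerfile shown in the question
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:9.5
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data   # keep the database files in a named volume
volumes:
  pgdata:

The web application then reaches the database at host db (the service name) on port 5432.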
Original answer
You can mount your WAR file into the Tomcat container at /usr/local/tomcat/webapps/name_of_your_file.war. Tomcat will then deploy the WAR automatically.
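A sketch of that approach with docker-compose, mounting the WAR instead of copying it into the image; the file names are examples:

version: '2'
services:
  web:
    image: tomcat:8-jre8
    ports:
      - "8080:8080"
    volumes:
      - ./application.war:/usr/local/tomcat/webapps/staging.war   # Tomcat auto-deploys WAR files that appear in webapps/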
By the way, I am doing something similar with a MySQL database; taking a look at my deployment file might be helpful to you.
I have my local docker machine and a remote docker machine on the cloud. My docker-compose app has a web container with this config:
web:
  container_name: web
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
    - ./data:/usr/src/app/data
  env_file: .env
  command: /usr/local/bin/gunicorn --workers 4 --timeout 120 --bind :8000 app:app
The important part is that second volume. I have this local folder called data with some 10GB of data in it. I made it a volume in the first place because otherwise building the container takes forever. Now that the app is production-ready, I'd like to deploy it. One problem: now my remote web container has an empty data folder mounted in it. So how do I move data from my local machine into a container on a remote docker machine? Where do I even move it to?
It seems like there are two tools for this:
docker cp which doesn't seem like it will work for remote docker machines
docker-machine scp which seems made for this, right?
I'm almost positive I need to use the second of these, but since I don't quite understand how docker machine works or where it keeps its data, I'm not sure what destination path to use:
$ dm scp -r /Users/alex/Documents/Project/data remote-machine:/usr/src/app/data
fails with error message:
scp: /usr/src/app/data: No such file or directory
Where should I be scp'ing this data in order to have it mount properly on my remote web container?
Local path vs. in-container path
Assuming you will use the same model remotely that you used locally, keep in mind that the path /usr/src/app/data is the path inside the container. When you are copying the files from one system to another, you just need to copy them from the current system to the remote system, then put them in a path where docker-compose knows how to find them, to mount into a new container.
So all you have to do is copy them from here to there, and use the same path relative to docker-compose.yml. It only knows your external volume as ./data, so if you put the directory in the same place (from docker-compose's perspective), everything should work the same.
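In other words, this fragment of the web service stays exactly as it is; the only requirement is that a data directory exists next to docker-compose.yml on the remote host (the ~/myproject path below is just an example):

# on the remote host:
#   ~/myproject/docker-compose.yml
#   ~/myproject/data/              <- copy the contents of your local ./data here
volumes:
  - /usr/src/app/static
  - ./data:/usr/src/app/data       # resolved relative to docker-compose.yml on whichever machine runs it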
How to copy the files
As for how to do the copy, these are just files, so it doesn't matter. scp -r should work, or make a zipfile, copy that, unzip into the correct place, etc. There are a ton of ways to copy files, so pick whatever is simplest for your case.
What exactly needs to be copied?
In the comments you expressed confusion about local vs. remote operations in docker-machine, and what else you needed to copy. Here's a fuller explanation:
On your local system (which I'm assuming is your own PC or laptop), you have docker-machine installed, and you've been using that for all of this development. Completely separate from that is your new cloud instance where you would like to deploy.
To run what you have locally already, up on your cloud instance, the cloud instance will need to have the following.
The docker-compose.yml file.
As long as you plan to use docker-compose to run this, that must be available.
Your .env file.
Since you are using an environment file in this setup, it must be available or docker-compose can't make use of it.
Your web image.
You have a build parameter for this container, but not an image parameter. So currently the only thing you can do is docker-compose build web, which generates an image locally that docker-compose then knows how to run.
Another option is to add an image parameter, with a repository:tag, such as myuser/myapp_web:1.0, and push that up to Docker Hub. Then, on your cloud instance, the image can be retrieved from Docker Hub instead of building it locally.
To do that, add the image parameter to the web service in docker-compose.yml, then build it and push it up.
docker-compose build web
docker-compose push web
Then on the cloud instance, you can fetch it:
docker-compose pull web
docker-compose will know to use that image because of the image parameter in docker-compose.yml (which is also present on the cloud server).
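For reference, the modified web service might look like this; the repository name and tag are hypothetical, and this assumes the version 2 compose format, where build and image can be combined:

version: '2'
services:
  web:
    container_name: web
    restart: always
    build: ./web
    image: myuser/myapp_web:1.0   # hypothetical repository:tag; build tags the image with this name, push/pull use it
    # ...rest of the service definition (expose, links, volumes, env_file, command) unchanged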
Ref: Creating a new repository on Docker Hub
Which of these options is preferable depends on how you want to manage things. Either one would work, but the "local build" option would require you copy any required source files to your cloud instance too (anything that is used during the build process).
I don't see in your question where the postgres container comes from. If you are also custom-building this one, then the same goes as for web. If you are using a public image for this, then you shouldn't need to copy anything; docker-compose will know how to fetch it, i.e. you can do this:
docker-compose pull postgres
What about docker cp and docker-machine scp?
You mentioned docker cp and docker-machine scp in your question.
As you already determined, docker cp is not a solution here. That command is for copying files between a container and the host filesystem. It has nothing to do with copying over a network.
As far as I know, docker-machine scp is for copying files between your local host and a docker-machine-managed VM. To copy files to your cloud instance, a generic tool like scp or sftp is likely easier.
I'm not sure as of which Docker version, but contrary to the statements in the question and @Dan_Lowe's answer, this works fine:
docker cp ./data container:/usr/src/app/
docker cp is a normal part of the API, so it works like any other command, even against a remote Docker host (for example when DOCKER_HOST points at the remote machine).
This is a continuation of my previous question about running a jhipster microservices application on AWS.
I've used docker-machine to create a new VM with Docker installed.
I have setup docker registry, and pushed my images to it, as well as logged into this registry on the AWS-VM.
I copied the contents of the /docker-composer directory I generated using yo jhipster:docker-compose and attempted to run:
docker-compose up -d
But I receive the error:
ubuntu#aws-test:~/docker-compose$ sudo docker-compose up
Unsupported config option for services service: 'jhipster-registry'
I can manually run the jhipster-registry with docker, but as there are many other underlying services I'd prefer to create a production docker-compose.yml file.
It looks like you're using an older version of docker-compose that doesn't support the V2 format. You need to upgrade to at least 1.6.2 (but 1.7.0 is currently the latest).
Aside from your docker-compose.yml you should have the jhipster-registry.yml and elk.yml files; if one of those files is not present it won't work, because the docker-compose file references them.
If you want to have everything in one file, you have to copy the jhipster-registry service from jhipster-registry.yml into your docker-compose.yml.
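A rough sketch of the merged file; the service definition below is illustrative, so copy the exact image, ports and environment from your generated jhipster-registry.yml:

version: '2'
services:
  jhipster-registry:
    image: jhipster/jhipster-registry   # use the image/tag your generated file specifies
    ports:
      - "8761:8761"
  # ...the services already defined in your docker-compose.yml (applications, databases, elk) stay alongside it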
I am setting up a brand new development environment, nginx+php-fpm and decided to create application containers (using docker) for each service.
Normally I would install nginx and PHP, modify the configuration (with an editor like vim), and reload the services until they were configured correctly.
To establish a similar procedure, I start the initial container and copy /etc/nginx to the host, modify the config files on the host, and use a Dockerfile (containing another COPY) to test the changes.
Given that the containers are somewhat ephemeral and aren't really meant to contain utilities like vim, I was wondering how people set up the initial configuration?
Once I have a working config I know the options with regard to configuration management for managing the files. It's really the initial setup of new containers that I'm curious about.
You can pass configuration through environment variables or mount a host file as a data volume.
Example for nginx configuration:
docker run --name some-nginx -v /yoursystem/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
You can use many images from Docker Hub as a starting point, e.g. nginx-php-fpm.
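The docker-compose equivalent of that bind-mount approach could look like this; the file names and ports are examples:

version: '2'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro       # edit on the host, then docker-compose restart web
  php:
    image: php:7-fpm
    volumes:
      - ./php.ini:/usr/local/etc/php/php.ini:ro     # same pattern for the PHP configuration

This keeps editors like vim on the host: the images stay minimal, and the configuration lives in the project directory where it can be version-controlled.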