I'm trying to run specific commands that would be automatically fired on docker-compose up.
I want to avoid all those steps: https://github.com/FLKone/Dodee/tree/php_mysql_slim
(downloading a zip containing the docker-compose.yml + some required default file)
In that example I need a default config file for Nginx.
So now the solution is to download a zip containing both the yml and the config file. But it would be better if the config file were downloaded when the user runs docker-compose up (or created by it, to limit network access).
(Maybe the best practice here is to create an installation script to download both the yml and the config file?)
Thanks
I'm trying to run specific commands that would be automatically fired on docker-compose up
Use entrypoint in your docker-compose.yml. You can do this per service, so the web container can download/configure nginx conf, the php container can run composer, etc.
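A minimal sketch of what that could look like, assuming plain nginx and php images and a placeholder URL for the config file (the image names, the URL and the commands are not from the original question):

version: "3"
services:
  web:
    image: nginx:alpine
    # fetch the default config, then start nginx in the foreground
    entrypoint: ["/bin/sh", "-c", "wget -O /etc/nginx/conf.d/default.conf https://example.com/default.conf && exec nginx -g 'daemon off;'"]
  php:
    image: php:7-fpm-alpine
    # run composer first (assuming it is available in the image), then start php-fpm
    entrypoint: ["/bin/sh", "-c", "composer install && exec php-fpm"]

With an entrypoint like this, docker-compose up is the only step the user has to run.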
Related
I am working with a Golang application that saves its data in an SQLite file at data/sqlite.db, in the same directory as the Dockerfile. My Dockerfile is something like this
P.S.: guys, it's my very first Dockerfile, please be kind to me :(
FROM golang:1.16.4
ENV GIN_MODE=release
ENV PORT=8081
ADD . /go/src/multisig-svc
WORKDIR /go/src/multisig-svc
RUN go mod download
RUN go build -o bin/multisig-svc cmd/main.go
EXPOSE $PORT
ENTRYPOINT ./bin/multisig-svc
I deployed this application to Google Cloud Platform, but somehow the container gets restarted there and after that my db has vanished. So I researched and tried to use volumes.
I build the container with docker build -t svc . and then run it with docker run -p 8080:8081 -v data:/var/dump -it svc, but I cannot see the data folder being copied to the /var/dump directory. My basic idea is: whenever the container starts, it loads the db file from dump and copies it to the data directory so the application can use it, and when it exits, it copies it back to the dump directory. I don't know if I am on the right track; any help would really be appreciated.
Edit:
The issue is that when no request arrives for 15 minutes, GCP shuts the container down and starts it again when a request comes in. So the problem is to somehow fetch the db file from the dump directory, update it, and write it back to the dump dir when the container goes down, for future use.
For a local run, or if you are running on a VM, you need to specify the absolute path of the directory you want to mount as a bind mount into your container. In this case something like this should work:
docker run -p 8080:8081 -v $(pwd)/data:/var/dump -it svc
When you don't specify an absolute path, the volume you're mounting into your running container is a named volume managed by the docker daemon, and it is not located in a path related to your current working directory. You can find more information about how Docker volumes work here: https://docs.docker.com/storage/volumes/
However, there are multiple environments on GCP (App Engine, Kubernetes, VMs), so depending on your environment you may need to adapt this solution.
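If you end up using docker-compose instead of plain docker run, a named volume mounted at the path the app actually writes to would look roughly like this (the in-container path /go/src/multisig-svc/data is an assumption based on the WORKDIR in the Dockerfile above):

version: "3"
services:
  svc:
    build: .
    ports:
      - "8080:8081"
    volumes:
      # named volume managed by the docker daemon; it survives container restarts
      - sqlite-data:/go/src/multisig-svc/data
volumes:
  sqlite-data: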
I have a docker setup (php7,mysql and apache). This is working correctly.
However, I have to transfer the project to a server where there is already an Apache running.
I was wondering how I could use the apache on the server to connect to my docker setup.
You can use either docker-compose or a Dockerfile, or a combination of both.
You can read more about them in Docker Compose Docs and Dockerfile Docs.
In short:
Create a docker-compose.yml file with the contents you need (as per the docs above), together with a Dockerfile.
Connect the services together in your code: for example, instead of localhost as the database host, use the service name you specify in the docker-compose.yml file.
Also copy or add the relevant Apache files, such as /etc/apache2/httpd.conf and /etc/apache2/sites-available/*.conf (all files ending in .conf), the MySQL data directory /var/lib/mysql/ (database files), and of course your project files.
Then run docker-compose up -d command.
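A rough sketch of such a docker-compose.yml, assuming the stock php:7-apache and mysql images (service names, paths and credentials are placeholders you would adapt to your project):

version: "3"
services:
  web:
    image: php:7-apache
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: app
    volumes:
      # keeps the database files outside the container
      - db-data:/var/lib/mysql
volumes:
  db-data:

In your application code the database host would then be db (the service name) rather than localhost.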
I have a series of containers created with docker-compose. Some of these containers communicate between each other with some rules defined in the docker-compose.yml file.
I need to move those containers from serverA to serverB (same OS), but I'm having issues understanding how this works.
I tried both the export and the save methods following tutorials I found on the web, but I was not able to keep the port configuration and networking rules after the export/import or save/load operations (there's a chance I didn't really get how they work...).
The only way I've found to successfully do this is to copy the whole docker-compose folder and run docker-compose up on serverB.
The question:
Is there a way to preserve the whole configuration of the containers and move them from a server to another using the export or save function?
Thank you for any help you can provide
2 scenarios:
Copy via ssh
$ sudo docker save myImage:tag | ssh user@IPhost "docker load"
Copy via scp
# Host A
$ docker save myImage:tag > myImage.tar
$ scp myImage.tar IPhostB:/tmp/myImage.tar
# Host B
$ docker load -i /tmp/myImage.tar
And then you need to copy the docker-compose.yml to the host B too.
The containers only carry the original build's own configuration; they don't save the environment that we generate with the docker-compose.yml file.
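So once the image is loaded and the docker-compose.yml (plus any files it references, such as .env files or bind-mounted directories) is on host B, you recreate the containers there the usual way:

# Host B
$ docker-compose up -d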
Bye
I want to make some changes to the config file of the VerneMQ image running on docker. Is there any way to reach the config file so that changes could be made?
If you exec into the container with docker exec -it <containerID> bash, you'll see that the vernemq.conf file is located under /etc/vernemq/. It's just a matter of replacing this default conf with your own config file. Keep your vernemq.conf in the same directory as your Dockerfile and then add the following line to the Dockerfile:
COPY vernemq.conf /etc/vernemq/vernemq.conf
The above line copies your config file into the container at the given location, replacing the existing one. Finally, build the image. For more advanced stuff, do check this out!
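Put together, a minimal Dockerfile for this could look like the following (assuming the erlio/docker-vernemq base image; adjust it to whichever VerneMQ image you are actually running):

FROM erlio/docker-vernemq
# replace the default config shipped with the image by your own
COPY vernemq.conf /etc/vernemq/vernemq.conf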
Another approach could be to simply set your options as environment variables for the docker image.
From the official Docker Hub page:
VerneMQ Configuration
All configuration parameters that are available in vernemq.conf can be defined using the DOCKER_VERNEMQ prefix followed by the configuration parameter name. E.g.: allow_anonymous=on is -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on", or allow_register_during_netsplit=on is -e "DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT=on". All available configuration parameters can be found on https://vernemq.com/docs/configuration/.
This is especially useful for compose-like yml-based deployments.
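For example, a compose service could set those options like this (the image name matches the one used further down; the two options are just the ones quoted above):

version: "3"
services:
  vernemq:
    image: erlio/docker-vernemq
    environment:
      - DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on
      - DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT=on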
You can create a new Dockerfile to modify the image contents:
FROM erlio/docker-vernemq
RUN <your modification command>
Use the new Dockerfile to build a new image and run a container from it.
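For example (my-vernemq is just a placeholder tag):

$ docker build -t my-vernemq .
$ docker run -d --name vernemq my-vernemq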
I have my local docker machine and a remote docker machine, on the cloud. My docker-compose app has a webcontainer with this config:
web:
  container_name: web
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
    - ./data:/usr/src/app/data
  env_file: .env
  command: /usr/local/bin/gunicorn --workers 4 --timeout 120 --bind :8000 app:app
The important part is that second volume. I have this local folder called data with some 10GB of data in it. I made it a volume in the first place because otherwise building the container takes forever. Now that the app is production-ready, I'd like to deploy it. One problem: now my remote web container has an empty data folder mounted in it. So how do I move data from my local machine into a container on a remote docker machine? Where do I even move it to?
It seems like there are two tools for this:
docker cp which doesn't seem like it will work for remote docker machines
docker-machine scp which seems made for this, right?
I'm almost positive I need to use the second of these, but since I don't quite understand how docker machine works or where it keeps its data, I'm not sure what destination path to use:
$ dm scp -r /Users/alex/Documents/Project/data remote-machine:/usr/src/app/data
fails with error message:
scp: /usr/src/app/data: No such file or directory
Where should I be scp'ing this data in order to have it mount properly on my remote web container?
Local path vs. in-container path
Assuming you will use the same model remotely that you used locally, keep in mind that the path /usr/src/app/data is the path inside the container. When you are copying the files from one system to another, you just need to copy them from the current system to the remote system, then put them in a path where docker-compose knows how to find them, to mount into a new container.
So all you have to do is copy them from here to there, and use the same path relative to docker-compose.yml. It only knows your external volume as ./data, so if you put the directory in the same place (from docker-compose's perspective), everything should work the same.
How to copy the files
As for how to do the copy, these are just files, so it doesn't matter. scp -r should work, or make a zipfile, copy that, unzip into the correct place, etc. There are a ton of ways to copy files, so pick whatever is simplest for your case.
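For example, something along these lines (the user, host and remote project path are placeholders for wherever your docker-compose.yml lives on the cloud instance):

$ scp -r ./data user@cloud-instance:/path/to/project/data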
What exactly needs to be copied?
In the comments you expressed confusion about local vs. remote operations in docker-machine, and what else you needed to copy. Here's a fuller explanation:
On your local system (which I'm assuming is your own PC or laptop), you have docker-machine installed, and you've been using that for all of this development. Completely separate from that is your new cloud instance where you would like to deploy.
To run on your cloud instance what you already have locally, the cloud instance will need the following.
The docker-compose.yml file.
As long as you plan to use docker-compose to run this, that must be available.
Your .env file.
Since you are using an environment file in this setup, it must be available or docker-compose can't make use of it.
Your web image.
You have a build parameter for this container, but not an image parameter. So currently the only thing you can do is docker-compose build web, which will locally generate an image that docker-compose then knows how to run.
Another option is to add an image parameter, with a repository:tag, such as myuser/myapp_web:1.0, and push that up to Docker Hub. Then, on your cloud instance, the image can be retrieved from Docker Hub instead of building it locally.
In that case, you can add an image parameter to the web container in docker-compose.yml, then build it and push it up.
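In docker-compose.yml that could look roughly like this (myuser/myapp_web:1.0 is just the example repository:tag from above):

web:
  build: ./web
  image: myuser/myapp_web:1.0
  # ...rest of the service definition unchanged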
docker-compose build web
docker-compose push web
Then on the cloud instance, you can fetch it:
docker-compose pull web
docker-compose will know to use that image because of the image parameter in docker-compose.yml (which is also present on the cloud server).
Ref: Creating a new repository on Docker Hub
Which of these options is preferable depends on how you want to manage things. Either one would work, but the "local build" option would require you to copy any required source files to your cloud instance too (anything that is used during the build process).
I don't see in your question where the postgres container comes from. If you are also custom-building this one, then the same goes as for web. If you are using a public image for this, then you shouldn't need to copy anything; docker-compose will know how to fetch it, i.e. you can do this:
docker-compose pull postgres
What about docker cp and docker-machine scp?
You mentioned docker cp and docker-machine scp in your question.
As you already determined, docker cp is not a solution here. That command is for copying files between a container and the host filesystem. It has nothing to do with copying over a network.
As far as I know, docker-machine scp is for copying files between your local host and a docker-machine-managed VM. To copy files to your cloud instance, a more generic tool like scp or sftp is likely easier.
Not sure as of which Docker version, but contrary to the statements in the question and the answer by Dan Lowe, this works fine:
docker cp ./data container:/usr/src/app/
docker cp is a normal part of the API, so it works like any other command, even remotely.