I'm trying to run Magento 2 locally, so I built a Docker LAMP stack using docker-compose. This is the docker-compose.yml portion related to the php container:
php:
  image: 'docker.io/bitnami/php-fpm:7.4-debian-10'
  ports:
    - 9001:9000
  volumes:
    - './docker/php/php.ini:/opt/bitnami/php/etc/conf.d/custom.ini'
  networks:
    - 'web'
and it works great. The point is that the Bitnami image doesn't seem to have cron pre-installed, and for the sake of simplicity it's probably easier to have it directly in the php container (so I can reach it through the Magento CLI functionality, e.g. bin/magento cron:install).
So, I changed the php service in the docker-compose file from image to build and added a separate Dockerfile written like this:
FROM docker.io/bitnami/php-fpm:7.4-debian-10
I didn't even add the RUN instructions for adding packages. What I'm expecting at the moment is that running docker-compose up -d --remove-orphans gives basically the same result. But it doesn't: refreshing the homepage now gives me a 503 error that doesn't seem to leave any trace in the log files, so I'm a bit stuck.
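For reference, this is roughly what the change looks like (the build context path here is just an example, and the RUN line is only the eventual goal, not something I've added yet; install_packages is Bitnami's package helper, plain apt-get should work too):
docker-compose.yml:
php:
  build: ./docker/php          # directory containing the Dockerfile below (path is an assumption)
  ports:
    - 9001:9000
  volumes:
    - './docker/php/php.ini:/opt/bitnami/php/etc/conf.d/custom.ini'
  networks:
    - 'web'
Dockerfile:
FROM docker.io/bitnami/php-fpm:7.4-debian-10
# eventual goal: install cron so bin/magento cron:install has something to register jobs with
RUN install_packages cron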
I have a Golang script which interacts with Postgres. I created a service in docker-compose.yml for both Golang and Postgres. When I run it locally with "docker-compose up" it works perfectly, but now I want to create one single image to push to my Docker Hub so it can be pulled and run with just "docker run ". What is the correct way of doing this?
The image created by "docker-compose up --build" launches with no error via "docker run " but immediately stops.
docker-compose.yml:
version: '3.6'
services:
  go:
    container_name: backend
    build: ./
    volumes:
      - # some paths
    command: go run ./src/main.go
    working_dir: $GOPATH/src/workflow/project
    environment: #some env variables
    ports:
      - "80:80"
  db:
    image: postgres
    environment: #some env variables
    volumes:
      - # some paths
    ports:
      - "5432:5432"
Dockerfile:
FROM golang:latest
WORKDIR $GOPATH/src/workflow/project
CMD ["/bin/bash"]
I am a newbie with Docker, so any comments on how to do things idiomatically are appreciated.
docker-compose does not combine Docker images into one; it runs (with up) or builds and then runs (with up --build) Docker containers based on the images defined in the yml file.
More info is in the official docs:
Compose is a tool for defining and running multi-container Docker applications.
So, in your example, docker-compose will run two containers:
1. one based on the go configuration
2. one based on the db configuration
To see which containers are actually running, use the command:
docker ps -a
For more info, see the Docker docs.
It is always recommended to run each service in a separate container, but if you insist on making an image which has both Golang and Postgres, you can take a Postgres base image and install Golang on it, or the other way around: take a Golang-based image and install Postgres on it.
The installation steps can be done inside the Dockerfile; please refer to:
- the official postgres Dockerfile
- the official golang Dockerfile
Combine them to get both.
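As a rough sketch of the "install Go on top of a Postgres base" idea (the image tag, package names and paths below are assumptions, and running two processes in one container is generally discouraged):
# Sketch only: one image containing both Postgres and the Go app
FROM postgres:13
RUN apt-get update \
    && apt-get install -y --no-install-recommends golang ca-certificates \
    && rm -rf /var/lib/apt/lists/*
ENV GOPATH=/go
WORKDIR /go/src/workflow/project
COPY . .
RUN go build -o /usr/local/bin/workflow ./src/main.go
# start.sh would have to launch Postgres (via the image's docker-entrypoint.sh) in the
# background and then exec the Go binary; that script is not shown here
COPY start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
CMD ["start.sh"]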
Edit (DigitalOcean deployment):
Well, if you copy everything (the Docker images and the yml file) to your droplet, it should bring the application up and running, similar to what happens when you do the same on your local machine.
An example can be found here: How To Deploy a Go Web Application with Docker and Nginx on Ubuntu 18.04
In production, usually for large-scale/high-traffic applications, more advanced solutions are used, such as:
- Docker Swarm
- Kubernetes
For more info on Kubernetes on DigitalOcean, please refer to the official docs.
Hope this helps you find your way.
I've been trying for a few days to get a Docker container up and running, and something always goes wrong.
I need (mostly) a LAMP stack, only with MongoDB instead of MySQL.
Of course I started by looking on Docker Hub and trying to compose an image from others, and Googled for configs. The simplest one couldn't get past the stage of setting MONGODB_ADMIN_USER and MONGODB_ADMIN_PASSWORD and always exited with code 1, even though the mentioned variables were set in the yml.
I tried to start with just the centos/mongodb image, install Apache, PHP and whatnot, commit it, and work on my own image, but without a kernel it's hard to properly install and run Apache within a Docker container.
So I tried once more and found a promising project here: https://github.com/akhomy/docker-compose-lamp
but I can't attach to the container and can't reach localhost with the default settings, even though the compose stage apparently goes fine.
Does anyone, by chance, have a working set of Dockerfiles / a docker-compose setup?
Or some helpful hint? Really, it looks like a straightforward task: take two images from Docker Hub, write a docker-compose.yml, run docker-compose up, case closed. I can't wrap my head around this :|
The Docker approach is not to put all services in one container but to have a single container per service. All Docker tools are aligned with this.
To start your LAMP stack, you just have to download docker-compose, create a docker-compose.yml file with 3 services defined, and run docker-compose up.
Docker Compose is an orchestration tool for containers, suited for a single machine.
You should take at least a small tour of this tool; just as an example, here is a sample config file:
docker-compose.yml
version: '3'
services:
  apache:
    image: bitnami/apache:latest
    .. here goes apache config ...
  db:
    image: mongo
    .. here goes mongo config ...
  php:
    image: php
    .. here goes php config ...
After you start this with docker-compose up, a network is created automatically for you and all services join it. They see each other under their service names (say, to connect to the database from php you would use db as the host name).
To connect to these services from the host PC, you will need to expose ports explicitly.
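For example, a slightly fuller sketch with ports exposed to the host (image tags and port numbers are assumptions; the Bitnami Apache image listens on 8080 inside the container):
version: '3'
services:
  apache:
    image: bitnami/apache:latest
    ports:
      - "80:8080"            # host port 80 -> Apache's port inside the container
  db:
    image: mongo
    ports:
      - "27017:27017"        # only needed if you want to reach MongoDB from the host
  php:
    image: php:7-fpm
    # no ports needed: Apache reaches PHP-FPM over the compose network as php:9000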
The situation is this:
I have three different docker-compose files for three different projects: a frontend, a middleware, and a backend. The frontend is Ember; the middleware and backend are Spring (Boot), which should not matter here though. The middleware uses an external_link to the backend, and the frontend (UI) uses an external_link to the middleware.
When I start with a clean Docker state (docker stop $(docker ps -aq), docker rm $(docker ps -aq)), everything works fine: I start the backend with docker-compose up, then the middleware, then the frontend. Everything is nice and all the external links work (I also run Cypress e2e tests on this setup, which work fine).
Now, when I change something in the middleware, rebuild the image, stop the container (Ctrl+C) and restart it using docker-compose up, and then try to restart the frontend (Ctrl+C and then docker-compose up), Docker tells me:
Starting UI ... error
ERROR: for UI Cannot start service ui: Cannot link to a non running container: /32f2db8e96a1_middleware AS /ui/backend
ERROR: for UI Cannot start service ui: Cannot link to a non running container: /32f2db8e96a1_middleware AS /ui/backend
ERROR: Encountered errors while bringing up the project.
Now what irritates me:
Where is the "32f2db8e96a1" coming from? The middleware container name is set to "middleware", which is also used in the external link of the UI, and it works fine for every clean startup (meaning, after a docker rm of all containers). Also, docker ps shows me that a container for the middleware is actually running.
Unfortunately, I cannot post the compose files here, but I am willing to add any info needed.
Running on Docker version 18.09.0, build 4d60db4
Ubuntu 18.04.1 LTS
I would like to restart any of these containers without a broken external link. How do I achieve this?
Since you guys are taking time for me, I took the time to clean up two of the compose files. This is the UI/frontend one:
version: '2.1'
services:
  ui:
    container_name: x-ui
    build:
      dockerfile: Dockerfile
      context: .
    image: "xxx/ui:latest"
    external_links:
      - "middleware:backend"
    ports:
      - "127.0.0.1:4200:80"
    network_mode: bridge
This is the middleware:
version: '2.1'
services:
  middleware:
    container_name: x-middleware
    image: xxx/middleware:latest
    build:
      dockerfile: src/main/docker/middleware/Dockerfile
      context: .
    ports:
      - "127.0.0.1:8080:8080"
      - "127.0.0.1:9003:9000"
    external_links:
      - "api"
    network_mode: "bridge"
The "api" one is essentially the same as middleware.
Please note: I removed volumes and environment, and I renamed things, so the names in the error message will not match perfectly. The naming scheme is the same, though: the service name is e.g. "middleware", and the container name uses a prefix, e.g. "x-middleware".
I am attempting to get a Docker container running PHP 7, with a specific volume location and a specific port mapping, 55211:80.
When I add the following code to my docker-compose.yml file and compose up, the process is successful.
phpsandbox:
  container_name: php_sandbox
  restart: always
  image: php:7
  ports:
    - "55211:80"
  volumes:
    - ./phpsandbox:/var/www/html/
I can see that my volume directory exists with my index.php inside... But if I then go to localhost:55211 in my browser, the browser says...
This page isn’t working localhost didn’t send any data.
What am I doing wrong in this part of my docker-compose.yml file?
UPDATE
From PowerShell, if I type docker ps -a to see all running containers, php_sandbox is continuously restarting, so I KNOW that something is wrong with this piece of code in my docker-compose.yml file, but I don't know what...
Thanks!
The php:7 image only includes the command-line PHP tools. The image does not include a web server, so there is nothing on port 80 to respond to requests.
Try the php:7-apache image, which comes with a preconfigured Apache httpd 2 web server that should work with your compose config.
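In other words, swapping only the image in your snippet (everything else unchanged) should be enough:
phpsandbox:
  container_name: php_sandbox
  restart: always
  image: php:7-apache        # ships Apache httpd serving /var/www/html on port 80
  ports:
    - "55211:80"
  volumes:
    - ./phpsandbox:/var/www/html/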
I'm having issues calling, from my "main" (web) service, tools that are supposed to be provided by some other Docker Compose services. I have the following docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres
    volumes:
      - postgres-db-volume:/data/postgres
  pdftk:
    image: mnuessler/pdftk
    volumes:
      - /tmp/manager:/work
  ffmpeg:
    image: jrottenberg/ffmpeg
    volumes:
      - /tmp/manager:/files
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
      - /tmp/manager:/files
    ports:
      - "8000:8000"
    depends_on:
      - db
      - pdftk
      - ffmpeg
volumes:
  postgres-db-volume:
I'm able to use db from web perfectly, but unfortunately not pdftk or ffmpeg (these are just command-line utilities, and they are not available when I open a shell in web):
manager$ docker-compose run web bash
Starting manager_ffmpeg_1
Starting manager_pdftk_1
root@16e4b755172d:/code# pdftk
bash: pdftk: command not found
root@16e4b755172d:/code# ffmpeg
bash: ffmpeg: command not found
How can I get pdftk and ffmpeg to be available within the web service? Is depends_on not the appropriate directive? Should I be extending web's Dockerfile to call an entry-point script that installs the content found in the other two services (even though that would seem counterproductive)?
I tried removing and rebuilding the web service after adding pdftk and ffmpeg, but that didn't solve it.
What can I do?
Thank you!
Looks like a misunderstanding of "depends_on". It is used to set a start order for containers.
For example: start the database before the web server, etc.
If you want access to tools installed in other containers, you would have to open an SSH connection, for example:
ssh pdftk <your command>
But I would install the necessary tools into the web container image.
Extending the Dockerfile to install both tools should do the trick.
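A rough sketch of what that could look like; the base image and package names are assumptions (on newer Debian releases, for instance, the package is pdftk-java rather than pdftk):
FROM python:3.9
# install the command-line tools the web code shells out to
RUN apt-get update \
    && apt-get install -y --no-install-recommends ffmpeg pdftk \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /code
COPY . /code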
To access the "tools" you do not need to install SSH, this is most probably pretty complicated and not wanted. The containers are not "merged into one" when using depends_on.
Depends_on is even less then starting order, its more "ruff container start order. E.g. eventhough app depends on DB, it will happen, that the db container process did not yet get fully started while app has already been started - depends_on right now is, in most cases, rather used to notify when a container needs re-initialization when a other container he depends on e.g. does get recreated.
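If the start order really matters (e.g. waiting for the database), compose file format 2.1 lets you combine depends_on with a healthcheck; a minimal sketch, assuming a Postgres db service:
version: '2.1'
services:
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy   # start web only after the healthcheck passes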
Other than that, start your containers and mount the Docker socket into them. Add this:
services:
  web:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Now, in the web container (where the docker CLI needs to be installed), you can do:
docker exec pdftk <thecommand>
That's the usual way to run commands on other services.
You can of course use HTTP/API-based implementations; in that case, you do not need to expose any ports or mount the socket. Moreover, you can access the services using their service name:
ping pdftk or ping ffmpeg
Edit: the method described below does not work for the OP's question. Still leaving it here as educational information.
Besides the options described by @opHASnoNAME ... you could try declaring a container volume for pdftk and ffmpeg and using the binaries directly, like so:
ffmpeg:
  volumes:
    - /usr/local/bin
and mount this on your web container:
web:
  volumes_from:
    - ffmpeg
Please note that this approach has some limitations:
the path /usr/local/bin mounted from ffmpeg should not already exist in web; otherwise you might need to mount only the individual files.
in web, /usr/local/bin must be in your $PATH.
since this is a kind of hotlinking of binaries, it might fail due to different Linux versions, missing shared libraries, etc., so it really only works for standalone binaries
all containers using volumes and volumes_from have to be deployed on the same host
But I still use this here and there, e.g. with the docker or docker-compose binaries.
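For instance, a minimal sketch of that pattern with the docker CLI (the docker:cli image and the binary's location are assumptions on my part):
cli:
  image: docker:cli
  volumes:
    - /usr/local/bin                                # the mostly static docker binary lives here
web:
  volumes_from:
    - cli
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock     # so the shared binary can talk to the daemon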