I've created a PrestaShop store on a server. Is there any way to use Docker for my store and migrate it to another server using Docker? I know that I'll need docker-compose, but to be honest I don't know what to do with the files on the current server.
OK, so I dug into the problem, and the solution to my question is below. What I did was pull the original PrestaShop image and copy my files into it.
The next step was to use the mariadb image. I had a backup.sql file exported from the previous store's phpMyAdmin; the mariadb image automatically imports any .sql file mounted into /docker-entrypoint-initdb.d on its first start with an empty data directory.
version: '2'
services:
  prestashop:
    image: prestashop
    ports:
      - 80:80
    links:
      - mariadb:mariadb
    depends_on:
      - mariadb
    volumes:
      - ./src:/var/www/html
      - ./src/modules:/var/www/html/modules
      - ./src/themes:/var/www/html/themes
      - ./src/override:/var/www/html/override
    environment:
      - PS_DEV_MODE=1
      - DB_SERVER=mariadb
      - DB_USER=root
      - DB_PASSWD=root
      - DB_NAME=prestashop
      - PS_INSTALL_AUTO=0
  mariadb:
    image: mariadb
    volumes:
      - ./backup.sql:/docker-entrypoint-initdb.d/backup.sql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=prestashop
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - mariadb
    ports:
      - 81:80
    environment:
      - PMA_HOST=mariadb
      - PMA_USER=root
      - PMA_PASSWORD=root
The biggest issue is the IP address when using docker-machine. Keep in mind that if you are using Docker Toolbox, the machine's IP is 192.168.99.100, but with Docker for Windows your containers are reachable on localhost (so just type localhost).
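If you are on Docker Toolbox and unsure of the VM's address, you can check it with docker-machine (a quick sketch; "default" is the usual machine name, adjust if yours differs):

# Print the IP of the Docker Toolbox VM (machine name is usually "default")
docker-machine ip default
# typically prints 192.168.99.100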
You can use this docker-compose.yml:
version: "3"
services:
prestashop:
image: prestashop/prestashop
networks:
mycustomnetwork:
ports:
- 82:80
links:
- mariadb:mariadb
depends_on:
- mariadb
volumes:
- ./src:/var/www/html
- ./src/modules:/var/www/html/modules
- ./src/themes:/var/www/html/themes
- ./src/override:/var/www/html/override
environment:
- PS_DEV_MODE=1
- DB_SERVER=mariadb
- DB_USER=root
- DB_PASSWD=mycustompassword
- DB_NAME=prestashop
- PS_INSTALL_AUTO=0
mariadb:
image: mariadb
networks:
mycustomnetwork:
volumes:
- presta_db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=mycustompassword
- MYSQL_DATABASE=prestashop
phpmyadmin:
image: phpmyadmin/phpmyadmin
networks:
mycustomnetwork:
links:
- mariadb:mariadb
ports:
- 1235:80
depends_on:
- mariadb
environment:
- PMA_HOST=mariadb
- PMA_USER=root
- PMA_PASSWORD=mycustompassword
volumes:
presta_db:
networks:
mycustomnetwork:
external: true
Replace mycustomnetwork and mycustompassword with your own values.
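Since the network is declared with external: true, Compose expects it to exist already; create it once beforehand (using whatever name you chose in place of mycustomnetwork):

docker network create mycustomnetwork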
Then run docker-compose up.
Web URL: localhost:82
phpMyAdmin URL: localhost:1235
You can follow this tutorial to set up PrestaShop in a Docker environment:
https://hub.docker.com/r/prestashop/prestashop/
You will need to add your current files to the PrestaShop container and most likely import your database into a MySQL container. docker-compose is then used to launch those containers together. Once this is done, you will be able to deploy the whole thing anywhere.
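As a rough sketch of that migration step (the container names and credentials below are placeholders for whatever your compose project actually creates), you can copy the shop files in and stream the SQL dump into the database container:

# Copy the existing shop files into the PrestaShop container's web root
docker cp ./src/. my_prestashop:/var/www/html/

# Stream the exported dump into the MySQL/MariaDB container
docker exec -i my_mariadb mysql -uroot -proot prestashop < backup.sql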
You should also declare a bridge network in your compose file; some examples from https://runnable.com/docker/docker-compose-networking may help.
This way the database can be configured so that it is reachable only by PrestaShop on the local Docker network, without being exposed to the outside. PrestaShop can also point at the database by its service name, so nothing breaks if the container's IP changes. The only thing you would leave exposed is port 80 on the app.
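A minimal sketch of that idea (service and network names are illustrative): only the app publishes a host port, while the database is reachable solely by its service name on the shared bridge network:

version: "3"
services:
  prestashop:
    image: prestashop/prestashop
    ports:
      - 80:80              # the only port exposed to the host
    networks:
      - shopnet
    environment:
      - DB_SERVER=mariadb  # address the db by service name, not IP
  mariadb:
    image: mariadb
    networks:
      - shopnet            # no ports: section, so nothing is exposed outside
networks:
  shopnet:
    driver: bridge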
I have a docker-compose.yml in the root of my VPS server:
version: '3'
services:
  mysql:
    image: mariadb:10.3.17
    command: --max_allowed_packet=256M --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - "./data/db:/var/lib/mysql:delegated"
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    restart: always
  litespeed:
    image: litespeedtech/litespeed:${LSWS_VERSION}-${PHP_VERSION}
    env_file:
      - .env
    volumes:
      - ./lsws/conf:/usr/local/lsws/conf
      - ./lsws/admin/conf:/usr/local/lsws/admin/conf
      - ./bin/container:/usr/local/bin
      - ./sites:/var/www/vhosts/
      - ./acme:/root/.acme.sh/
      - ./logs:/usr/local/lsws/logs/
    ports:
      - 80:80
      - 443:443
      - 443:443/udp
      - 7080:7080
    restart: always
    environment:
      TZ: ${TimeZone}
  phpmyadmin:
    image: bitnami/phpmyadmin:5.0.2-debian-10-r72
    ports:
      - 8080:80
      - 8443:443
    environment:
      DATABASE_HOST: mysql
    restart: always
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
    volumes:
      - esdata:/usr/share/elasticsearch/data
    restart: always
volumes:
  esdata:
It contains the server configuration shown above. Should I write my Magento 2 configuration in the same file? That configuration is shown below:
version: '3'
services:
  web:
    image: webdevops/php-apache-dev:ubuntu-16.04
    container_name: web
    restart: always
    user: application
    environment:
      - WEB_ALIAS_DOMAIN=local.domain.com
      - WEB_DOCUMENT_ROOT=/app/pub
      - PHP_DATE_TIMEZONE=EST
      - PHP_DISPLAY_ERRORS=1
      - PHP_MEMORY_LIMIT=2048M
      - PHP_MAX_EXECUTION_TIME=300
      - PHP_POST_MAX_SIZE=500M
      - PHP_UPLOAD_MAX_FILESIZE=1024M
    volumes:
      - /path/to/magento:/app:cached
    ports:
      - "80:80"
      - "443:443"
      - "32823:22"
    links:
      - mysql
  mysql:
    image: mariadb:10
    container_name: mysql
    restart: always
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=magento
    volumes:
      - db-data:/var/lib/mysql
  phpmyadmin:
    container_name: phpmyadmin
    restart: always
    image: phpmyadmin/phpmyadmin:latest
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - PMA_USER=root
      - PMA_PASSWORD=root
    ports:
      - "8080:80"
    links:
      - mysql:db
    depends_on:
      - mysql
volumes:
  db-data:
    external: false
If not, then what should the scenario be?
1. Should I create a new docker-compose-magento.yml in the root or inside the Magento folder?
2. If I write a docker-compose.yml inside the Magento folder, how can I connect it with the Docker setup in my server root so that I can also use Elasticsearch?
First, you need to know what application is already running with the existing docker-compose file. You can check that in the existing virtual host configuration file, which you will find inside the "sites" directory that is mapped to the LiteSpeed web server's virtual host path "/var/www/vhosts" in the volume mapping.
If an application is definitely running with that docker-compose file, then you have to create a separate docker-compose file for Magento. In this case a separate Docker network will be created for all the Magento 2 services, and you cannot reach a service (Elasticsearch) that lives on another network (in the separate docker-compose project). You would have to add ES to the Magento 2 docker-compose file as well.
If nothing is running via the existing docker-compose file, then you could merge both docker-compose files according to your requirements, or simply use only your new Magento 2 docker-compose file.
So the main thing here is the use of two different networks: Docker containers can only talk to other containers on the same network.
Also, LiteSpeed is a web server and binds the same port numbers as the Apache image (webdevops/php-apache-dev:ubuntu-16.04), so there will be a host port conflict if you create a new docker-compose file and try to run both simultaneously. You need to manage that by using different host ports. On a production server that is not really an option, because people are not going to access web URLs with non-default port numbers.
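On a development box, avoiding the clash only takes remapping the host side of the port bindings in the new file, for example:

services:
  web:
    ports:
      - "8081:80"    # host port 8081 instead of 80; the container still serves on 80
      - "8444:443"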
For production, the solution is Kubernetes, where you can run multiple applications that all use the same public ports without conflict, because Kubernetes divides your single physical server into multiple virtual machines, so there are no port clashes.
See this article for Kubernetes setup https://technicallysound.in/how-to-setup-a-static-site-on-kubernetes/
See this article for Magento setup on Docker https://technicallysound.in/how-to-setup-magento-2-on-docker-for-development/
There is a Ruby on Rails application that uses MongoDB and PostgreSQL databases. When I run it locally everything works fine; however, when I try to run it in remote containers, it throws this error message:
2021-03-14T20:22:27.985+0000 Failed: error connecting to db server: no reachable servers
The docker-compose.yml file defines the following services: redis, mongodb, db, and rails.
I start the remote containers with the following commands:
docker-compose build - build successful
docker-compose up -d - containers are up and running
When I connect to the rails container and run
bundle exec rake aws:restore_db
the error mentioned above is thrown. I don't know what is wrong here. The mongodb container is up and running.
The docker-compose.yml is shown below:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
volumes:
  db-data:
  mongo-data:
This is how I start all four remote containers:
$ docker-compose up -d
Starting proj_db_1 ... done
Starting proj_redis_1 ... done
Starting proj_mongodb_1 ... done
Starting proj_rails_1 ... done
Please help me understand how the remote containers should interact with each other.
Your configuration should point to the services by name, not to a port on localhost. For example, if you were connecting to Redis as localhost:6380 or 127.0.0.1:6380, you now need to use redis:6380.
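As a sketch, assuming the Rails app reads its database hosts from .env/development.env (the variable names below are hypothetical, yours will differ), the change is just swapping localhost for the compose service names:

# .env/development.env (illustrative variable names)
MONGO_HOST=mongodb
POSTGRES_HOST=db
REDIS_HOST=redis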
If this still doesn't help, you can try adding links between the containers so that the names given to them as services can be resolved. The file would then look something like this:
version: '3.4'
services:
  redis:
    image: redis:5.0.5
    networks:
      - front-end
    links:
      - "mongodb:mongodb"
      - "db:db"
      - "rails:rails"
  mongodb:
    image: mongo:3.6.13
    volumes:
      - mongo-data:/data/db
    networks:
      - front-end
    links:
      - "redis:redis"
      - "db:db"
      - "rails:rails"
  db:
    image: postgres:11.3
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "rails:rails"
  rails:
    build: .
    image: proj:latest
    depends_on:
      - db
      - mongodb
      - redis
    volumes:
      - .:/proj
    ports:
      - "3000:3000"
    tty: true
    stdin_open: true
    env_file:
      - .env/development.env
    networks:
      - front-end
    links:
      - "redis:redis"
      - "mongodb:mongodb"
      - "db:db"
volumes:
  db-data:
  mongo-data:
networks:
  front-end:
The links allow hostnames to be defined in the containers.
The links flag is legacy, and in newer versions of docker-engine it is not required for user-defined networks. Links are also ignored in the case of a Docker Swarm deployment. However, since there are still old installations of Docker and docker-compose around, this is one thing to try while troubleshooting.
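For reference, on a modern engine the same name resolution works with the user-defined network alone, no links at all; a trimmed sketch:

version: '3.4'
services:
  mongodb:
    image: mongo:3.6.13
    networks:
      - front-end
  rails:
    build: .
    networks:
      - front-end   # can now reach mongodb:27017 by service name
networks:
  front-end: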
I have a Laravel app that lives in Docker, and I want to integrate Elasticsearch into it.
This is how my docker-compose.yaml looks:
version: '3'
services:
  laravel:
    build: ./docker/build
    container_name: laravel
    restart: unless-stopped
    privileged: true
    ports:
      - 8084:80
      - "22:22"
    volumes:
      - ./docker/settings:/settings
      - ../2agsapp:/var/www/html
      # - vendor:/var/www/html/vendor
      - ./docker/temp:/backup
      - composer_cache:/root/.composer/cache
    environment:
      - ENABLE_XDEBUG=true
    links:
      - mysql
  mysql:
    image: mariadb:10.2
    container_name: mysql
    volumes:
      - ./docker/db_config:/etc/mysql/conf.d
      - ./db:/var/lib/mysql
    ports:
      - "8989:3306"
    environment:
      - MYSQL_USER=dev
      - MYSQL_PASSWORD=dev
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=laravel
    command: --innodb_use_native_aio=0
  phpmyadmin:
    container_name: pma_laravel
    image: phpmyadmin/phpmyadmin:latest
    environment:
      - MYSQL_USER=dev
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=dev
      - MYSQL_DATABASE=laravel
      - PMA_HOST=mysql
    ports:
      - 8083:80
    links:
      - mysql
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node
volumes:
  storage:
  composer_cache:
I run docker-compose up -d and then hit a really strange issue.
If I execute curl localhost:9200 inside the laravel container, it returns Failed to connect to localhost port 9200: Connection refused.
But if I run curl localhost:9200 outside of Docker, it returns the expected response.
Maybe I don't understand how this works; I hope someone can help me.
When you want to access another container from within a container, you should use the service name, not localhost.
If you are inside laravel and want to access Elasticsearch, you should run:
curl es:9200
Since you mapped port 9200 to the host (the ports section in docker-compose), the port is available from your local machine as well; that's why curling 9200 from your local machine works.
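The same substitution applies inside the Laravel configuration; whatever key your Elasticsearch client package reads (the names below are hypothetical) should point at the service name rather than localhost:

# Laravel .env (key names depend on the ES client you use)
ELASTICSEARCH_HOST=es
ELASTICSEARCH_PORT=9200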
I'm trying to start up Zabbix in Docker. I've created a docker-compose file with several services, one of which is the database. I need to start the database first, and afterwards get the database's IP address to set up the other services, but I don't know how to do this. I have already tried using links, but without success.
This is my docker-compose.yml:
version: "2"
services:
mysql-zabbix :
image: "mysql:5.7"
ports:
- "53306:3306"
networks:
- net_zabbix
volumes:
- "vol_db_zabbix:/var/lib/mysql"
environment:
- "MYSQL_ROOT_PASSWORD=abcd"
- "MYSQL_DATABASE=zabbix"
- "MYSQL_USER=zabbix"
- "MYSQL_PASSWORD=123456"
zabbix-server:
image: "zabbix/zabbix-server-mysql:alpine-3.4.11"
ports:
- "10051:10051"
networks:
- net_zabbix
environment:
- "DB_SERVER_PORT=53306"
- DB_SERVER_HOST=zabbix.db
- "MYSQL_USER=zabbix"
- "MYSQL_PASSWORD=123456"
depends_on:
- mysql-zabbix
external_links:
- mysql-zabbix:zabbix.db
zabbix-web:
image: "zabbix/zabbix-web-apache-mysql:alpine-3.4.11"
ports:
- "80:80"
networks:
- net_zabbix
environment:
- DB_SERVER_HOST=zabbix.db
- "DB_SERVER_PORT=53306"
- "MYSQL_USER=zabbix"
- "MYSQL_PASSWORD=123456"
- ZBX_SERVER_HOST=zabbix.server
- "PHP_TZ=America/Sao_Paulo"
depends_on:
- zabbix-server
external_links:
- mysql-zabbix:zabbix.db
- zabbix-server:zabbix.server
zabbix-agent:
image: "zabbix/zabbix-agent:alpine-3.4.11"
ports:
- "10050:10050"
networks:
- net_zabbix
environment:
- "ZBX_HOSTNAME=demo_zabbix"
- ZBX_SERVER_HOST=zabbix.server
external_links:
- zabbix-server:zabbix.server
zabbix-proxy:
image: "zabbix/zabbix-proxy-sqlite3:alpine-3.4.11"
ports:
- "10053:10050"
networks:
- net_zabbix
environment:
- "ZBX_HOSTNAME=demo_zabbix"
- ZBX_SERVER_HOST=zabbix.server
external_links:
- zabbix-server:zabbix.server
networks:
net_zabbix:
volumes:
vol_db_zabbix:
Docker Compose will implicitly create a private network for you, and once it's created that private network, Docker provides a DNS service that will resolve container names to IP addresses. (The explicit networks: declaration isn't harmful and has the same effect.) You can refer to a container by its name and by additional aliases. Docker Compose will register aliases to reach each container under its key in the docker-compose.yml file.
All of this means that you can use the other container names as the values of the various *_HOST environment variables. Note that the ports used are the container internal ports; if you're connecting to a service port that's also being published to the host, it's the port on the right side of the colon.
In your example, you should specify (for different containers as appropriate):
environment:
  - DB_SERVER_HOST=mysql-zabbix
  - DB_SERVER_PORT=3306
  - ZBX_SERVER_HOST=zabbix-server
You do not need to specify links of any sort. depends_on is strictly optional, but if you run docker-compose up zabbix-web, it will also start the things it depends on.
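As a quick sanity check once the stack is up (assuming ping is present in the image, as it is in the Alpine-based ones), you can confirm the service name resolves from inside a container:

docker-compose exec zabbix-server ping -c 1 mysql-zabbix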
Today I switched from "Docker Toolbox" to "Docker for Mac", because Docker finally has write access to my user directory now (which didn't work with "Docker Toolbox") - yay!
But this change also means that all containers now run under my localhost and not under Docker's own IP as before (e.g. 192.168.99.100).
Since my localhost already listens on various ports by default (80, 443, ...) and I don't want to keep adding newly created, non-conflicting ports to my local dev domains (e.g. example.dev:8443), I wonder how to run my containers as before.
I read about network configs and tried a lot of things (creating a new host network, exposing ports with an IP in front of them, ...), but didn't get it working.
What kind of config do I need to run my app container with the IP 192.168.99.100? This is my docker-compose.yml so far:
version: '2'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - mysql
      - redis
      - memcached
    ports:
      - 80:80
      - 443:443
      - 22:22
      - 3000:3000
      - 3001:3001
    volumes:
      - ./app/:/app/
      - /tmp/debug/:/tmp/debug/
      - ./:/docker/
    volumes_from:
      - storage
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  mysql:
    build:
      context: docker/mysql/
      dockerfile: MariaDB-10
    ports:
      - 3306:3306
    volumes_from:
      - storage
    volumes:
      - ./data/mysql:/var/lib/mysql
      - /tmp/debug/:/tmp/debug/
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  redis:
    build: docker/redis/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  memcached:
    build: docker/memcached/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  storage:
    build: docker/storage/
    volumes:
      - /storage
You need to declare "networks:" for each of your services, e.g.:
version: '2'
services:
  app:
    image: xxxx:xxx
    ports:
      - "80:80"
    networks:
      - my-network
  mysql:
    image: xxxx:xxx
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Then, from your app's configuration, you can use "mysql" as the hostname of the database server.
You can define a network in your compose file, then add any services to the network.
https://docs.docker.com/compose/networking/
But I would suggest you just use different host ports now that you are running natively, e.g. 8080:80.
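A sketch of that remapping for the app service above; the host side moves to a free port while the container side stays unchanged:

services:
  app:
    ports:
      - "8080:80"     # browse http://localhost:8080
      - "8443:443"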