I'm new to Docker, using it only for local development, and I'm trying to move from a MAMP Pro setup to containers for the portability and ease of setup.
One main problem I can't seem to figure out is how to have multiple databases inside a single localhost connection, as seen here in Navicat:
In this example db1 is used by project1, and db2 and db3 are both used by another project, project2.
If I try to make containers in Docker for these projects, I have to create a different localhost connection for each one (each with a different port), different volumes (probably), and keep track of the settings for each new database that I need.
Is this pattern of single-connection/multiple-databases doable in Docker?
The closest I've seen is using a reverse proxy so that everything points to a single port, but I couldn't make it work with multiple projects. :/
Am I missing something obvious?
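To make it concrete, what I'm hoping for is roughly one MySQL container on one port that holds all the project databases (db1, db2 and db3 above), something like this (service name, image tag and password are just placeholders):

version: "3"
services:
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:

Every project would then point at localhost:3306 and simply use a different database name, like I do now with MAMP Pro.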
I installed the Docker version of AzerothCore following the ChromieCraft instructions and it seems to run buttery smooth.
That said, I don't seem to be able to access the databases with SQLyog or HeidiSQL.
How else can I access auth and world tables?
I am familiar with using these tools to open the databases with other projects.
Sorry if this seems basic to others. It does not seem so to me.
Thanks in advance for any help! I'd like to do things such as update the realmlist table and export characters so they can survive DB updates.
:)
In Docker, different services are usually isolated, each one in its own container, which means that your server should be in a different container than your database.
I have no idea what ChromieCraft is, nor have I ever used AzerothCore, but I've checked their guide and they are using docker-compose to launch an array of containers (3 in total), one for each service (auth server, world server, database).
If you followed this guide, you can see that the port for the database is exposed, and it is as follows:
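(I don't have their exact compose file at hand, so the service and image names in this fragment are only illustrative; the 3306 mapping is the relevant part.)

services:
  database:
    image: mysql:5.7
    ports:
      - "3306:3306"   # host port : container port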
So the address would be localhost:3306 (if you're running it on your own computer; otherwise replace localhost with the host's IP), username: root, password: password.
I've spent months building an application and now I'm looking to deploy it, but I'm new to Docker and I seem to have a mental block when it comes to actually containerizing my application. I need to run the following technologies:
php 7.2
mysql 5.7
apache 2.4
phpMyAdmin 4.7
My application will need to be available exclusively through https and I’m assuming the connection between my application and the mysql container will also need to be through a secure port.
In addition to that, I have a WordPress site that will serve as the pre-login experience for my application, which I'd like to dockerize as well, but it should not share the same DB. When I move this to a prod environment, I will not include the phpMyAdmin container.
How many containers do I need? I was thinking that I would need at least 5:
apache
php
mysql (my application)
mysql (wordpress)
phpmyAdmin
Should my application and the WordPress site live in the same PHP container, or should I create separate containers for each?
What should my docker-compose.yml file and dockerfiles look like to achieve this feat?
The driving idea here is that a container should contain a single "service". You don't break things into containers by software component (php, apache, etc.) but rather by whatever needs to be combined to create a single service. So if your application is a PHP application hosted by Apache, then you'd want a container for your application that contained PHP, Apache and your application code. That would provide your application as a service.
Same goes for WordPress. If WordPress is running behind Apache and needs PHP, you'd create a second container containing PHP, Apache, WordPress, and your WordPress content, producing your "WordPress service".
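As a rough sketch, such an application container could start from the official PHP-with-Apache image (the 7.2 tag matches the question; the code path and the extension are assumptions about your app):

# Dockerfile for the application service: PHP 7.2 + Apache + the application code
FROM php:7.2-apache

# The base image already runs Apache with PHP; add the MySQL driver the app will need
RUN docker-php-ext-install pdo_mysql

# Copy the application code into Apache's document root
COPY ./app/ /var/www/html/

EXPOSE 80

HTTPS termination is left out here; it can be configured in this image or handled by a proxy placed in front of it.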
Each of your individual databases can be seen as a service, so you might want two containers running MySQL, one serving each of your databases. You could choose to consider the database server as a whole to be a service, and have it serve both of your databases. Then you could get away with a single MySQL container. Which way you go with this is a minor issue. Having a single database server will likely save a little bit of resources by avoiding some duplication.
If all of your services need to talk to each other, the easiest way to do this with Docker is to use Docker Compose. This lets you create multiple containers that know about each other and can communicate very easily between each other by way of some simple DNS logic that Docker Compose provides. With Compose, you give each of your containers a simple name, and then that name can be looked up via DNS to provide the IP address of each container. So for example, if your MySQL container was named "mysql", your app container could connect to it via the DNS address "mysql" with no additional work on your part.
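Putting that together, a minimal docker-compose.yml along these lines might look like the sketch below (image tags, ports and credentials are placeholders, and the HTTPS/TLS details are left out; in practice a TLS-terminating proxy would sit in front):

version: "3"
services:
  app:
    build: ./app                  # PHP 7.2 + Apache + your application code
    ports:
      - "8000:80"
    depends_on:
      - mysql
  wordpress:
    image: wordpress:latest       # PHP + Apache + WordPress as its own service
    environment:
      WORDPRESS_DB_HOST: wordpress-db
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: wp-secret
      WORDPRESS_DB_NAME: wordpress
    ports:
      - "8080:80"
    depends_on:
      - wordpress-db
  mysql:
    image: mysql:5.7              # your application's database
    environment:
      MYSQL_ROOT_PASSWORD: app-secret
    volumes:
      - app_db:/var/lib/mysql
  wordpress-db:
    image: mysql:5.7              # separate database for WordPress
    environment:
      MYSQL_ROOT_PASSWORD: wp-secret
      MYSQL_DATABASE: wordpress
    volumes:
      - wp_db:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin  # drop this service in prod
    environment:
      PMA_HOST: mysql
    ports:
      - "8081:80"
volumes:
  app_db:
  wp_db:

Inside this Compose network the application reaches its database simply at the hostname mysql, exactly as described above.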
I would like to run several instances of a multi-container application at the same time using the same compose file. One of the containers in the application accepts websockets on a certain port.
I have an nginx proxy to forward different domains or locations to different instances of the application. The instances are actually different tenants using the application.
I would like to simply be able to run:
docker stack deploy -c docker-stack.yml tenant1
docker stack deploy -c docker-stack.yml tenant2
And somehow get different ports to the apps, which I then can use in the proxy to forward different websocket connections to different application instances, either using locations or virtual hosts.
So either:
ws://tenant1.mydomain.com
or
ws://mydomain.com/tenant1
How to configure the proxy to do this can surely be figured out. I've started reading a bit about https://github.com/jwilder/nginx-proxy, which seems nice. However, it requires that I set the virtual host name as an environment variable for each app instance, and I can't seem to find a way to pass arguments with my docker stack deploy command.
Ideally I would not have to care about the exact ports; they could be random. But they need to somehow be known to the nginx proxy so it can forward. I want to be able to easily spin up a new app-instance (tenant) stack and just set up the proxy for that name (or even better, have the proxy handle that automatically based on the naming of the app).
Bonus if both examples above work (both virtual host and location), since that would make it possible to test and develop without creating subdomains / new domains.
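To make it concrete, something like this is what I have in mind: parameterize the stack file with a variable (TENANT below is a name I made up) and substitute it at deploy time:

# docker-stack.yml (fragment)
services:
  app:
    image: myapp:latest                       # placeholder image
    environment:
      - VIRTUAL_HOST=${TENANT}.mydomain.com   # what jwilder/nginx-proxy keys on
    ports:
      - "8080"                                # hoping this publishes the websocket port on a random host port

I'm not even sure whether docker stack deploy substitutes ${TENANT} from my shell on its own, or whether I'd have to pre-render the file first with docker-compose config, something like:

TENANT=tenant1 docker-compose -f docker-stack.yml config > tenant1.yml
docker stack deploy -c tenant1.yml tenant1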
Suggestions?
So, here is the problem: I need to do some development, and for that I need the following packages:
MongoDB
Node.js
Nginx
RabbitMQ
Redis
One option is that I take an Ubuntu image, create a container, install these packages one by one, then start my server and expose the ports.
But that can just as easily be done in VirtualBox, and it would not use the power of Docker. So instead I would have to start building my own image with these packages. Now here is the question: if I start writing my Dockerfile and place the commands to download Node.js (and the others) inside it, doesn't this again become the same thing as virtualization?
What I need is to start from Ubuntu and keep adding references to MongoDB, Node.js, RabbitMQ, Nginx and Redis inside the Dockerfile, and finally expose the respective ports.
Here are the queries I have:
Is this possible? Like adding references to other images inside the Dockerfile when you are starting FROM one base image.
If yes, then how?
Also, is this the correct practice or not?
How do you do this kind of thing in Docker?
Thanks in advance.
Keep images light. Run one service per container. Use the official images on docker hub for mongodb, nodejs, rabbitmq, nginx etc. Extend them if needed. If you want to run everything in a fat container you might as well just use a VM.
You can of course do crazy stuff in a dev setup, but why spend time setting up something that has zero value in a production environment? What if you need to scale up one of the services? How do you set memory and CPU constraints on each service? ...and the list goes on.
Don't make monolithic containers.
A good start is to use docker-compose to configure a set of services that can talk to each other. You can make a prod and dev version of your docker-compose.yml file.
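A dev-oriented docker-compose.yml for that stack could look roughly like this (image tags and the app's build path are assumptions):

version: "3"
services:
  app:
    build: .                   # your Node.js app, built from its own small Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - mongo
      - redis
      - rabbitmq
  nginx:
    image: nginx:alpine        # would proxy to the app with your own nginx.conf (not shown)
    ports:
      - "80:80"
  mongo:
    image: mongo:4
    volumes:
      - mongo_data:/data/db
  redis:
    image: redis:alpine
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"          # management UI
volumes:
  mongo_data:

Each service stays in its own container, and the app reaches them by their service names (mongo, redis, rabbitmq) on the Compose network.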
Getting into the right frame of mind
In a perfect world you would run your containers in a clustered environment in production to be able to scale your system and have concurrency, but that might be overkill depending on what you are running. It's at least good to have this in the back of your head, because it can help you make the right decisions.
Some points to think about if you want to be a purist:
How do you have persistent volume storage across multiple hosts?
Reverse proxy / load balancer should probably be the entry point into the system that talks to the containers using the internal network.
Is my service even able to run in a clustered environment (multiple instances of the container)?
You can of course do dirty things in dev such as mapping in host volumes for persistent storage (and many people who use docker standalone in prod do that as well).
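For instance, a host bind mount in a dev compose file can be as simple as this (image and paths are just illustrative):

services:
  postgres:
    image: postgres:12
    volumes:
      - ./data/postgres:/var/lib/postgresql/data   # host directory mapped into the container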
Ideally we should separate Docker in dev and Docker in prod. Docker is a fantastic tool during development, as you can have redis, memcached, postgres, mongodb, rabbitmq, node or whatnot up and running in minutes and share that compose setup with the rest of the team. Docker in prod can be a completely different beast.
I would also like to add that I'm generally against the fanaticism that "everything should be running in docker" in prod. Run services in docker when it makes sense. It's also not uncommon for larger companies to make their own base images. This can be a lot of work and will require maintenance to keep up with security fixes etc. It's not necessarily the first thing you jump on when starting with docker.
Suppose I have a web server and a database server installed in the same common Docker image. Is it possible to run them simultaneously, as if they were running inside the same virtual machine?
Is running docker run <args> twice the best practice for this use case?
You should not use a single image for your web server and the database. You should use one image for the web server and one for the database.
To run this, you would run your database server and then run your webserver and link it to your database server.
There are many examples on the internet. I'll just leave this one here: https://github.com/saada/docker-compose-php-mysql
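A minimal sketch of that run-then-link sequence (the image names, app image and password are placeholders):

docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name web --link db:db -p 8080:80 my-php-app

On current Docker versions a user-defined network (docker network create) is preferred over the legacy --link flag, but the idea is the same: the web container reaches the database by name.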
According to this Stack Overflow answer, it is perfectly possible to do that via a script that takes charge of starting each of the services:
Can I run multiple programs in a Docker container?
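The wrapper-script idea boils down to something like this (the two binaries are just placeholders for your web server and database processes):

#!/bin/bash
# start.sh - naive supervisor: launch both processes, exit when either one dies

# start the database server in the background
/usr/local/bin/my-db-server &

# start the web server in the background
/usr/local/bin/my-web-server &

# wait for either background process to exit and propagate its status (bash 4.3+)
wait -n
exit $?

The image's CMD then points at this script; for anything serious, a proper process supervisor such as supervisord is the more robust option.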
Although most people will just tell you to micro-service everything into multiple different containers, it might well be much more manageable in some cases to have containers that launch more than one process. Think of a cloud deployment where you might want to run multiple web apps, each corresponding to a different system test.
So you would have your small, isolated HSQLDB running in server mode, followed by your WildFly or Spring Boot app, and finally your system test run by mvn.
If you have all three in one container, then it is just a matter of choosing which Jenkins node your all-in-one container runs on. Since it packs everything within itself, irrespective of any other container, and the image size is not monstrous, you are really agile. That is just one example.
So you have to see what is best for you.
With big DBs like MySQL you are often better off running them in an isolated container as a base platform for all the other Docker containers. With DBs like HSQLDB you can easily afford a DB per container.