Better approach to Docker images

I'm new to Docker, so I want to know the best approach for using it. I have a project that needs three components to work:
A JBoss application server
PostgreSQL
A Spring Boot application
So, based on that, my questions are:
1) Should I have one Docker image for each component mentioned above? If so, why not just put everything together? My idea of Docker is to simplify deploying an application, so putting everything together would make it easy to install the app in another environment, right?
2) If so (one Docker image per component), the Spring Boot application is just a "java -jar" command; is it really necessary to have a Docker image for it?
3) In the case of PostgreSQL, should the image contain all my database structure and data, or just vanilla PostgreSQL without anything?

To answer your questions
1) Should I have one Docker image for each component mentioned above? If so, why not just put everything together? My idea of Docker is to simplify deploying an application, so putting everything together would make it easy to install the app in another environment, right?
It is best to keep them as separate components (see the compose sketch after this list) so that:
You can isolate issues (which helps with debugging)
You can selectively scale (horizontally) specific stateless components when you run on Kubernetes or Docker Swarm
You can set hardware limits (RAM, CPU, etc.) per component
You can use different base images (which might be useful for optimization)
You can build & test your components independently
The list goes on.
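As a rough illustration, a docker-compose.yml with one service per component might look like the sketch below; the image tags, port, and build path are only placeholders, not a recommendation of specific versions.

# A minimal sketch, one service per component (image tags, port, and paths are illustrative)
services:
  jboss:
    image: jboss/wildfly        # or your own JBoss image
    ports:
      - "8080:8080"
  spring-app:
    build: ./spring-app         # built from your Spring Boot project's Dockerfile
    depends_on:
      - postgres
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me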
2) If so (one Docker image per component), the Spring Boot application is just a "java -jar" command; is it really necessary to have a Docker image for it?
Please check the list above (the reasons it's best to separate) and see whether it fits your use case. Note that bundling the app into an existing component will affect your scaling strategy.
For example, if you run 3 instances of the JBoss component with the Spring Boot app baked in, you will spawn 3 instances of both of them, which you might not want (see the command sketch below).
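With separate services you can scale only the piece that needs it; a hedged example using the docker compose --scale flag (the service name here is the made-up one from the sketch above):

docker compose up -d --scale spring-app=3
# jboss and postgres keep running as single instances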
3) In the case of PostgreSQL, should the image contain all my database structure and data, or just vanilla PostgreSQL without anything?
I would recommend using the vanilla postgres image and mounting your structure & data as a host volume, so that they don't get lost when the container is recreated; see the example below.
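A minimal sketch with the official postgres image (paths and credentials are placeholders):

services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - ./pgdata:/var/lib/postgresql/data                    # data survives container re-creation
      - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql  # runs only on first initialization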
I hope this helps you in some way.

Related

How to connect Docker Hasura to multiple Postgres databases?

Say I have Hasura running in a container (inside Kubernetes) and I want this Hasura container to connect to 3 different Postgres databases.
Is there a way to configure this without using the Hasura console web page, since this has to do with scaling later on?
You can use the pg_add_source API to dynamically add new database sources to Hasura.
Conversely, you can use pg_drop_source to remove them.
The above approaches would work in a dynamic environment where databases are being added and removed regularly. If they're more static, you might want to consider programmatically manipulating the metadata files and then applying the changes using metadata apply instead.
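For instance, pg_add_source can be called against Hasura's metadata endpoint; this is only a sketch, with the host, admin secret, source name, and connection string as placeholders:

curl -X POST http://localhost:8080/v1/metadata \
  -H 'Content-Type: application/json' \
  -H 'x-hasura-admin-secret: myadminsecret' \
  -d '{"type": "pg_add_source", "args": {"name": "reports_db", "configuration": {"connection_info": {"database_url": "postgres://user:pass@reports-host:5432/reports"}}}}'

# and to remove it later:
curl -X POST http://localhost:8080/v1/metadata \
  -H 'x-hasura-admin-secret: myadminsecret' \
  -d '{"type": "pg_drop_source", "args": {"name": "reports_db"}}'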

MariaDB settings in Docker

A while back I created an instance of mariadb inside a docker container on a machine running Ubuntu. I've since learned that I'll need to update some settings to keep things running smoothly, but when I created the image, I did not specify any .cnf volumes. How do I update/create a .cnf file for this image? I'm a complete newb when it comes to docker, so please spoon-feed me.
I've tried accessing the file from within the image, but there are no text editors.
The defaults of MariaDB work pretty much out of the box (container) for small instances. You should only need to change settings when problems occur.
If you have spare memory you can increase your innodb_buffer_pool_size.
With the mariadb container, you don't need to edit the .cnf files; you can just add a few options on the command line per the docs (which you should definitely read), as in the sketch below.
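For example, the official mariadb image passes extra arguments on to the server process, so a setting like innodb_buffer_pool_size can be supplied at run time; the tag, password, volume name, and 1G value are placeholders:

docker run -d --name mariadb \
  -e MARIADB_ROOT_PASSWORD=change-me \
  -v mariadb_data:/var/lib/mysql \
  mariadb:10.11 \
  --innodb-buffer-pool-size=1G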
I recommend using the defaults for a while, and if you encounter problems, post a new question on dba.stackexchange.com that includes show global status output and specifics on the queries that are slow (show create table TBLNAME / explain QUERY).

How to enable caching in ArangoDB via Docker or arangojs?

I would like to enable caching in ArangoDB, automatically when my app start.
I'm using docker-compose to start the whole thing, but apparently there's no simple parameter to enable caching in the official ArangoDB image.
According to the docs, all the files in /docker-entrypoint-initdb.d/ are executed at container start. So I added a .js file with this code:
require('@arangodb/aql/cache').properties({mode: 'on'});
It is indeed executed but caching doesn't seem to be enabled (from what I see with arangosh within the container).
My app is a JS app using arangojs, so if I can do it this way, I'd be happy too.
Thanks!
According to the performance and server config docs, you can enable caching in several ways.
Your method of adding require("@arangodb/aql/cache").properties({ mode: "on" }); to a .js file in the /docker-entrypoint-initdb.d/ directory should work, but keep an eye on the logs. You may need to redirect log output with a different driver (journald, syslog, etc.) to see what's going on. Make sure to run the command via arangosh to check whether it took effect.
If that's a bust, you might want to see if there is a way to pass parameters at runtime (such as --query.cache-mode on). Unfortunately, I don't use Docker Compose, so I can't give you direct advice here, but try something like -e QUERY.CACHE-MODE=ON.
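One way that might work is overriding the container command in docker-compose so the flag reaches arangod at startup; this is only a sketch and assumes the image's entrypoint forwards these arguments to the server (check the image docs), with the password as a placeholder:

services:
  arangodb:
    image: arangodb/arangodb
    environment:
      ARANGO_ROOT_PASSWORD: change-me
    command: ["arangod", "--query.cache-mode", "on"]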
If there isn't a way to pass params, then you could modify the config file: /etc/arangodb3/arangod.conf.
And don't forget about the REST API methods for system management. You can view and alter the AQL configuration in the Web UI by clicking Support -> Rest API -> AQL.
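If you go the HTTP route, the query cache properties endpoint can be toggled with a plain request once the server is up; a sketch, with host and credentials as placeholders:

curl -u root:change-me -X PUT \
  http://localhost:8529/_db/_system/_api/query-cache/properties \
  -d '{"mode": "on"}'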
One thing to keep in mind - I'm not sure if the caching settings are global or tied to a specific database. View the configuration on multiple databases (including _system) to test the settings.

defining a group of services to run with docker compose

I'm wondering if there is a method to define, and launch, a group of services configured in a docker-compose.yml file.
To give a real-world example, I'm working with Laradock: it has a lot of services configured (I think more than 50), and you have to "select" which ones to run every time.
In fact, to run a normal PHP + Apache + MySQL stack, you can use:
docker compose up workspace apache2 mysql
The final question is: can those three services be grouped under an alias, like "amp", and that alias used to launch them with:
docker compose up amp ?
What I have tried already
I thought about duplicating the docker-compose.yml into a simpler one, where only the required services are present.
Anyway, the configuration I'm using (Laradock) is quite complex; being able to define an alias would lead to a much easier-to-handle configuration.
Imagine the case where you need to add one more service to the group: instead of cutting & pasting its configuration(s), you just add its name, and nothing else.
Is this possible somehow?
Thank you
One simple way that might do what you want is to create an alias in your shell/terminal. To do that in a "permanent" way you might need to edit the ~/.bashrc or ~/.bash_profile file. For reference you can check, for example, this link here.
That way, if you want to change the services of a "group" (which is defined by the alias), you just need to edit the line for that alias in the file.
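For example (the alias name "amp" and the service list are just illustrations), something like this in ~/.bashrc or ~/.bash_profile would do it:

alias amp='docker compose up workspace apache2 mysql'
# then, from the laradock directory:
amp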

dropwizard get on demand jdbi connection

I have a simple CRUD application with backend code in Dropwizard. The entire app just comprises simple resource classes and CRUD operations, except for one case where some business logic is involved.
I am trying to extract this into a service instead of putting it in the resource class itself. But for that, my service would need an on-demand JDBI connection to access data and do its thing.
All my connect strings and config values are in the YML file. Since this app will be running on different servers with different yml files, I don't want to hardcode the yml file name in order to read it again just to get the connect strings.
How do I achieve this?
Can you detect what environment you are on?
If so, can you do something like ${environment}.yml?
There is the Apache Commons Configuration project, which might help.
Otherwise, is it a case of: in dev you want to run
java -jar app.jar server dev.yml
and in prod you want to run java -jar app.jar server prod.yml? I imagine you have separate daemons in each environment. So, those environments will pick up the right configuration, if you've configured them that way.
Otherwise, if the property names are the same, but their values differ, and you pick up the right yml in the right environment, things should work.
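To illustrate, the two files could share the same property names and only swap values; this sketch assumes your Configuration class maps a "database" section to a DataSourceFactory, and the URLs and credentials are placeholders:

# dev.yml
database:
  driverClass: org.postgresql.Driver
  user: app
  password: dev-secret
  url: jdbc:postgresql://localhost:5432/app

# prod.yml - same property names, different values
database:
  driverClass: org.postgresql.Driver
  user: app
  password: prod-secret
  url: jdbc:postgresql://db.internal:5432/app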
If I haven't addressed your question, can you please elaborate your problem a little more?
