Stack is Rails 3 / Postgres + mongrel. I recently had to increase the connection pool because hits on one of the mongrels were always timing out. I reasoned that with 3 mongrels + a delayed_job running on each, I'd need 6 connections in the pool (it was set to 5). I increased this to 10 in database.yml and it resolved the timeout issues. Now, though, when I monitor connections in Postgres I am seeing this sort of thing:
SELECT datname,usename,procpid,client_addr,waiting,query_start,current_query FROM pg_stat_activity;
db1 | www-data | 8658 | | f | 2014-03-19 10:03:54.084825+00 | <IDLE>
db1 | www-data | 9071 | | f | 2014-03-19 09:58:42.306558+00 | <IDLE>
db1 | www-data | 8721 | | f | 2014-03-19 10:03:53.980691+00 | <IDLE>
db1 | www-data | 8722 | | f | 2014-03-19 10:03:53.874443+00 | <IDLE>
db1 | www-data | 8733 | | f | 2014-03-19 10:04:20.380137+00 | <IDLE>
db1 | www-data | 9080 | | f | 2014-03-19 10:00:54.157541+00 | <IDLE>
db1 | www-data | 10843 | | f | 2014-03-19 10:04:18.506355+00 | <IDLE>
#and so on and so on for more than 20 instances...
It balloons up to more than 20 connections and seemingly isn't closing them (I'm assuming that the presence of <IDLE> means they're open, just not doing anything). It does seem to go up and down, so some connections are being closed.
I thought Rails/ActiveRecord was supposed to close its connections automatically, but this doesn't seem to be the case.
Have I read this correctly? Do I have a leak somewhere? What could be causing it?
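For reference, the relevant database.yml entry now looks roughly like this (credentials are placeholders, not the real config):
production:
  adapter: postgresql
  database: db1
  username: www-data
  password: <redacted>
  pool: 10      # raised from 5 to stop requests timing out while waiting for a connection
  timeout: 5000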
When using ActiveRecord connections outside of actions initiated by a controller, e.g. in a delayed job, you must use the following syntax to ensure connections are returned to the pool:
ActiveRecord::Base.connection_pool.with_connection do
  # your code here
end
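For example, a delayed_job might wrap its database work like this (the job class and query are illustrative, not from the question):
class ReportJob
  def perform
    # Check a connection out of the pool for this block only, and check it
    # back in when the block exits, even if an exception is raised.
    ActiveRecord::Base.connection_pool.with_connection do
      Report.where(:pending => true).find_each do |report|
        report.generate!
      end
    end
  end
end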
Related: cross-site dupe (no answer).
Basically I have to set up an incremental backup system for MariaDB. Everything lives in Docker containers, and I'm trying to use wal-g (similar to how easily I can set it up with Postgres and WAL archiving) together with mariabackup. The backup task is executed in a separate Docker container based on the mariadb:10.8.3-jammy image with the wal-g binary added (I'm running wal-g backup-push). During debugging I export to a data folder (mounted as a Docker volume at /bak).
I'm presenting more details below, but let me begin with the actual problem: incremental backups are large. If I disable all DB clients and repeat the same backup a few times in a row (so nothing changes between them), the initial backup is approx. 30 MB and all subsequent incremental backups are about 30 MB each.
# Initial
$ sudo docker compose run --rm db_backup wal-g backup-push
[+] Running 1/0
⠿ Container reporting-db-1 Running 0.0s
INFO: 2022/11/20 19:24:01.865762 FILE PATH: stream_20221120T192400Z/stream.lz4
INFO: 2022/11/20 19:24:01.865984 Backup sentinel: {"StartLocalTime":"2022-11-20T19:24:00.366507Z","StopLocalTime":"2022-11-20T19:24:01.86589Z","UncompressedSize":350375370,"CompressedSize":227476734,"Hostname":"b08f12bbbb68"}
# Incremental
$ sudo docker compose run --rm db_backup wal-g backup-push
[+] Running 1/0
⠿ Container reporting-db-1 Running 0.0s
INFO: 2022/11/20 19:24:07.109202 FILE PATH: stream_20221120T192405Z/stream.lz4
INFO: 2022/11/20 19:24:07.109509 Backup sentinel: {"StartLocalTime":"2022-11-20T19:24:05.639259Z","StopLocalTime":"2022-11-20T19:24:07.109321Z","UncompressedSize":28104928,"CompressedSize":14609954,"Hostname":"3d9854550254"}
If I provide an empty database as the data source, the result is even more comical: both initial and incremental backups are of the same size (± a few KB).
I supposed that mariabackup dumps system tables, which use the Aria storage engine and do not support incremental backup (source). However, manual verification seems to disagree: I tried converting all mysql.* tables to InnoDB (ALTER TABLE ... ENGINE=InnoDB) and re-creating the backup, getting 1.5x larger incremental dumps (though the compressed results are slightly better).
# Initial
$ sudo docker compose run --rm db_backup wal-g backup-push
[+] Running 1/0
⠿ Container reporting-db-1 Running 0.0s
INFO: 2022/11/20 18:56:06.254782 FILE PATH: stream_20221120T185602Z/stream.lz4
INFO: 2022/11/20 18:56:06.254984 Backup sentinel: {"StartLocalTime":"2022-11-20T18:56:02.328882Z","StopLocalTime":"2022-11-20T18:56:06.254911Z","UncompressedSize":366630153,"CompressedSize":229793575,"Hostname":"843e27ceff9e"}
# Incremental 1
$ sudo docker compose run --rm db_backup wal-g backup-push
[+] Running 1/0
⠿ Container reporting-db-1 Running 0.0s
INFO: 2022/11/20 18:56:15.123822 FILE PATH: stream_20221120T185610Z/stream.lz4
INFO: 2022/11/20 18:56:15.124035 Backup sentinel: {"StartLocalTime":"2022-11-20T18:56:10.974406Z","StopLocalTime":"2022-11-20T18:56:15.12395Z","UncompressedSize":44359711,"CompressedSize":13798326,"Hostname":"26a6e38afb59"}
# Incremental 2
$ sudo docker compose run --rm db_backup wal-g backup-push
[+] Running 1/0
⠿ Container reporting-db-1 Running 0.0s
INFO: 2022/11/20 19:06:57.626672 FILE PATH: stream_20221120T190653Z/stream.lz4
INFO: 2022/11/20 19:06:57.626904 Backup sentinel: {"StartLocalTime":"2022-11-20T19:06:53.667772Z","StopLocalTime":"2022-11-20T19:06:57.626823Z","UncompressedSize":44360402,"CompressedSize":13799019,"Hostname":"184342c39610"}
Unfortunately, I cannot ask my client to switch to a more convenient DBMS and have to do something here. I do not want to store these 30 MB for every backup if it can be avoided.
Is my reasoning ok? What else can cause this weird behaviour?
Can I just convert all system tables to InnoDB? I found evidence that it can be harmful on MySQL 5.7, but cannot find more recent references. It would resolve the problem, because then everything would support incremental backup properly. (Dupe? Not really.)
Are there any alternative backup solutions that can handle the described situation better?
Can I give up and prevent mariabackup from backing up the system tables? I doubt this is a viable solution (because the more complete the backup is, the easier it is to live with), but I may be wrong.
Side questions:
How can I examine the binary stream output by mariabackup to confirm that system tables are the actual problem (and perhaps find out which tables exactly)?
What can cause the dump size fluctuations? Whenever I run multiple incremental backups in a row, the compressed and uncompressed sizes are slightly different every time (they can either increase or decrease compared to the previous run). Backup should be a deterministic process, and all actions above are performed on the same database without any modifications in between (I started from a local dump loaded into a new MariaDB container with clean volumes, and no clients have access to that instance, so nothing should differ), so why do I observe this? I checked with mariabackup without the wal-g tool, and the size is stable then, so the fluctuation is introduced at some higher level; this is less interesting.
Everything described above reproduces with plain mariabackup as well, generating approx. 27 MB files per incremental backup.
mariabackup wrapper script:
# Pick the most recent LSN directory (if any) to use as the incremental base.
last_lsns=$(ls /bak/lsns/ | sort -rn | head -n1)
if [ -n "$last_lsns" ]; then
    # Incremental backup: write the new LSNs to a fresh timestamped directory.
    ex="/bak/lsns/lsn_$(date +%s)"
    mkdir -p "$ex"
    mariabackup -H"$WEB_DB_HOST" -uroot -p"$MYSQL_ROOT_PASSWORD" --backup \
        --stream=xbstream --datadir=/var/lib/mysql \
        --incremental-basedir="/bak/lsns/$last_lsns" --extra-lsndir="$ex"
else
    # First run: full backup, keeping its LSNs for the next incremental run.
    mkdir -p /bak/lsns/initial
    mariabackup -H"$WEB_DB_HOST" -uroot -p"$MYSQL_ROOT_PASSWORD" --backup \
        --stream=xbstream --datadir=/var/lib/mysql \
        --extra-lsndir=/bak/lsns/initial
fi
This script is used as WALG_STREAM_CREATE_COMMAND. I also have
WALG_MYSQL_DATASOURCE_NAME='root:$MYSQL_ROOT_PASSWORD@tcp($REPORTING_DB_HOST:$REPORTING_DB_PORT)/$REPORTING_DATABASE'
WALG_FILE_PREFIX='/bak/foo'
and these settings (in fact written in the compose file, but that's probably not important; sketched below) seem to be correct (backups are created as expected and written to the proper directories).
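Roughly how those settings sit in the compose file (the image tag and /bak volume are as described above; the wrapper script path is a placeholder):
  db_backup:
    image: mariadb:10.8.3-jammy        # with the wal-g binary added on top
    volumes:
      - ./bak:/bak                     # backup target mounted at /bak
    environment:
      WALG_FILE_PREFIX: /bak/foo
      WALG_STREAM_CREATE_COMMAND: /usr/local/bin/mariabackup-wrapper.sh   # the wrapper script above
      WALG_MYSQL_DATASOURCE_NAME: 'root:$MYSQL_ROOT_PASSWORD@tcp($REPORTING_DB_HOST:$REPORTING_DB_PORT)/$REPORTING_DATABASE'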
Here are the storage engines used:
> select table_schema, table_name, engine from information_schema.tables where table_schema <> 'performance_schema' and engine <> 'MEMORY';
+--------------------+---------------------------+--------+
| table_schema | table_name | engine |
+--------------------+---------------------------+--------+
| information_schema | ALL_PLUGINS | Aria |
| information_schema | CHECK_CONSTRAINTS | Aria |
| information_schema | COLUMNS | Aria |
| information_schema | EVENTS | Aria |
| information_schema | OPTIMIZER_TRACE | Aria |
| information_schema | PARAMETERS | Aria |
| information_schema | PARTITIONS | Aria |
| information_schema | PLUGINS | Aria |
| information_schema | PROCESSLIST | Aria |
| information_schema | ROUTINES | Aria |
| information_schema | SYSTEM_VARIABLES | Aria |
| information_schema | TRIGGERS | Aria |
| information_schema | VIEWS | Aria |
| mysql | slow_log | CSV |
| mysql | db | Aria |
| mysql | help_relation | Aria |
| mysql | general_log | CSV |
| mysql | innodb_index_stats | InnoDB |
| mysql | servers | Aria |
| mysql | time_zone_transition_type | Aria |
| mysql | gtid_slave_pos | InnoDB |
| mysql | time_zone | Aria |
| mysql | roles_mapping | Aria |
| mysql | transaction_registry | InnoDB |
| mysql | procs_priv | Aria |
| mysql | proxies_priv | Aria |
| mysql | global_priv | Aria |
| mysql | func | Aria |
| mysql | innodb_table_stats | InnoDB |
| mysql | help_topic | Aria |
| mysql | time_zone_leap_second | Aria |
| mysql | help_keyword | Aria |
| mysql | time_zone_transition | Aria |
| mysql | event | Aria |
| mysql | columns_priv | Aria |
| mysql | tables_priv | Aria |
| mysql | time_zone_name | Aria |
| mysql | plugin | Aria |
| mysql | table_stats | Aria |
| mysql | index_stats | Aria |
| mysql | proc | Aria |
| mysql | help_category | Aria |
| mysql | column_stats | Aria |
| test_reporting | merchant_configs | InnoDB |
| test_reporting | masterdata_prediction | InnoDB |
| test_reporting | aggregator_config | InnoDB |
| test_reporting | masterdata | InnoDB |
| test_reporting | fixed_costs | InnoDB |
| test_reporting | timeline | InnoDB |
| test_reporting | migrations | InnoDB |
| test_reporting | user_analytics | InnoDB |
| test_reporting | affiliate | InnoDB |
| test_reporting | vat_config | InnoDB |
| sys | sys_config | Aria |
| reporting | merchant_configs | InnoDB |
| reporting | masterdata_prediction | InnoDB |
| reporting | aggregator_config | InnoDB |
| reporting | masterdata | InnoDB |
| reporting | fixed_costs | InnoDB |
| reporting | timeline | InnoDB |
| reporting | migrations | InnoDB |
| reporting | user_analytics | InnoDB |
| reporting | affiliate | InnoDB |
| reporting | vat_config | InnoDB |
| reporting_web | record_change | InnoDB |
| reporting_web | jwt_expiry | InnoDB |
| reporting_web | forecasting | InnoDB |
| reporting_web | users | InnoDB |
| reporting_web | alembic_version | InnoDB |
| reporting_web | merchant_analytics | InnoDB |
| reporting_web | email_config | InnoDB |
| reporting_web | user_company | InnoDB |
| reporting_web | user_detail | InnoDB |
+--------------------+---------------------------+--------+
As referenced in MDEV-23614, Aria system tables cannot be incrementally backed up.
As you saw, system tables can be changed to InnoDB in 10.4+. Azure does this by default.
Two small issues prevent this from being the default under --enforce-storage-engine=InnoDB, both related to the help tables:
CREATE TABLE IF NOT EXISTS help_relation has a FK reference to help_keyword, but help_keyword isn't created yet (easy fix: swap the order in mysql_system_tables.sql).
In fill_help_tables.sql, LOCK TABLES help_topic WRITE, help_category WRITE returns ER_WRONG_LOCK_OF_SYSTEM_TABLE once the tables are InnoDB. It can be commented out, but it really is a bug that needs reporting/fixing.
To save space: the help tables and the proc table are the biggest ones.
The help tables are only needed for the HELP command syntax and could be truncated/removed.
proc, by default, contains a bunch of GIS functions you may not need.
Both could be converted to InnoDB, as sketched below.
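A rough sketch of that conversion (table names taken from the listing above; verify against your MariaDB version before running it on a production instance):
-- Convert the largest Aria system tables to InnoDB so they can be
-- backed up incrementally.
ALTER TABLE mysql.help_topic    ENGINE=InnoDB;
ALTER TABLE mysql.help_category ENGINE=InnoDB;
ALTER TABLE mysql.help_keyword  ENGINE=InnoDB;
ALTER TABLE mysql.help_relation ENGINE=InnoDB;
ALTER TABLE mysql.proc          ENGINE=InnoDB;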
Alternatives:
Use binary logs as a PITR mechanism and run incremental mariabackups less often, further apart.
Contribute a patch to mariabackup.
I have two containers running in a swarm. Each exposes a /stats endpoint which I am trying to scrape.
However, using the swarm port obviously results in the queries being load balanced and therefore the stats are all intermingled:
+--------------------------------------------------+
|                      Server                      |
|    +-------------+        +-------------+        |
|    |             |        |             |        |
|    | Container A |        | Container B |        |
|    |             |        |             |        |
|    +-------------+        +-------------+        |
|            \                    /                |
|             \                  /                 |
|              +--------------+                    |
|              |              |                    |
|              | Swarm Router |                    |
|              |              |                    |
|              +--------------+                    |
|                      v                           |
+----------------------|---------------------------+
                       |
                    A Stats
                    B Stats
                    A Stats
                    B Stats
                       |
                       v
I want to keep the load balancer for application requests, but also need a direct way to make requests to each container to scrape the stats.
+--------------------------------------------------+
|                      Server                      |
|    +-------------+        +-------------+        |
|    |             |        |             |        |
|    | Container A |        | Container B |        |
|    |             |        |             |        |
|    +-------------+        +-------------+        |
|      |     \                    /     |          |
|      |      \                  /      |          |
|      |       +--------------+         |          |
|      |       |              |         |          |
|      |       | Swarm Router |         |          |
|      v       |              |         v          |
|      |       +--------------+         |          |
|      |               |                |          |
+------|---------------|----------------|----------+
       |               |                |
    A Stats            |             B Stats
    A Stats     Normal Traffic       B Stats
    A Stats            |             B Stats
       |               |                |
       |               |                |
       v               |                v
A dynamic solution would be ideal, but since I don't intend to do any dynamic scaling, something like hardcoded ports for each container would be fine:
::8080 Both containers via load balancer
::8081 Direct access to container A
::8082 Direct access to container B
Can this be done with swarm?
From inside an overlay network you can get the IP addresses of all replicas with a tasks.<service_name> DNS query:
; <<>> DiG 9.11.5-P4-5.1+deb10u5-Debian <<>> -tA tasks.foo_test
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19860
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;tasks.foo_test. IN A
;; ANSWER SECTION:
tasks.foo_test. 600 IN A 10.0.1.3
tasks.foo_test. 600 IN A 10.0.1.5
tasks.foo_test. 600 IN A 10.0.1.6
This is mentioned in the documentation.
Also, if you use Prometheus to scrape those endpoints for metrics, you can combine the above with dns_sd_configs to set the targets to scrape (here is an article on how). This is easy to get running but somewhat limited in features (especially in large environments).
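A minimal scrape config along those lines might look like this (the service name, port and metrics path are assumptions based on the question):
scrape_configs:
  - job_name: 'swarm-stats'
    metrics_path: /stats
    dns_sd_configs:
      - names:
          - 'tasks.my_service'   # one A record per replica, as in the dig output above
        type: A
        port: 8080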
A more advanced way to achieve the same is to use dockerswarm_sd_config (docs, example configuration). This way the list of endpoints is gathered by querying the Docker daemon, along with some useful labels (e.g. node name, service name, custom labels).
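A hedged sketch of the dockerswarm_sd_config variant (Prometheus needs access to the Docker socket; the service name used in the filter is an assumption):
scrape_configs:
  - job_name: 'swarm-tasks-stats'
    metrics_path: /stats
    dockerswarm_sd_configs:
      - host: unix:///var/run/docker.sock
        role: tasks
    relabel_configs:
      # Keep only the tasks belonging to the service we want to scrape.
      - source_labels: [__meta_dockerswarm_service_name]
        regex: mystack_my_service
        action: keep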
While less than ideal, you can introduce a microservice that acts as an intermediary to the other containers that are exposing /stats. This microservice would have to be configured with the individual endpoints and operate in the same network as said endpoints.
This doesn't bypass the load balancer, but instead makes it so it does not matter.
The intermediary could roll up the information, or you could make it more sophisticated by passing a list of opaque identifiers which the caller can then use to individually query the intermediary.
It is slightly "anti-pattern" in the sense that you have a highly coupled "stats" proxy that must be configured to be able to hit each endpoint.
That said, it is good in the sense that you don't have to expose individual containers outside of the proxy. From a security perspective, this may be better because you're not leaking additional information out of your swarm.
You can try to publish a specific container port on the host machine; add to your services:
ports:
  - target: 8081
    published: 8081
    protocol: tcp
    mode: host
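Following the question's hypothetical layout (direct access on 8081/8082, load-balanced traffic staying on its existing published port), that might look roughly like this for two separate services; names and images are placeholders:
services:
  container_a:
    image: myapp:latest
    ports:
      - target: 8080       # the /stats endpoint becomes reachable on the node at :8081
        published: 8081
        protocol: tcp
        mode: host
  container_b:
    image: myapp:latest
    ports:
      - target: 8080       # and this one at :8082
        published: 8082
        protocol: tcp
        mode: host
Note that with mode: host the port is published on the node the task lands on, so multiple replicas of one service on the same node would clash over the port; this works best when A and B are separate services or pinned to known nodes.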
Let's say I have the following FitNesse page:
!| com.myproject.fitnesse.fixtures.SSHFixture |
| set host | ${hostSi1} |
| set port | ${port} |
| set user | ${user} |
| connect |
| show | run command | pwd |
| disconnect |
www.<variable>.com
The page contains one table and a link. The table will execute the console command pwd. How do I save the result of that command in a FitNesse variable? I then want to use the variable within the same page, for example in the mentioned link.
Some resources mention SLIM style, but I have no idea how to accomplish that in my case:
Using data from fitnesse table as a variable
@Fried Hoeben: Yes, it's a script. Got a solution from my colleague.
Let's say you have a fixture for DB stuff and there's a method called 'execute query' which returns the result of the query.
We set the value into the variable 'myname' so we can use it in another fixture table as '#{myname}'.
!| com.mystest.fitnesse.fixtures.DBFixture |
| set database | ${dbName} |
| set username | ${dbUser} |
| set password | ${dbPassword} |
| connect | ${dbType} | to | ${dbUrl} | database | ${dbPort} |
| set | myname | execute query | SELECT name FROM customer WHERE id = 1 |
| disconnect |
Use of variable 'myname':
!| com.mytest.fitnesse.fixtures.SSHFixture |
| set host | ${host} |
| set port | ${port} |
| set user | ${user} |
| connect |
| show | execute | echo #{myname} |
| disconnect |
Not sure if the 'set' feature is part of the FitNesse default implementation or of our company implementation.
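As an aside (standard FitNesse SLIM, not specific to these fixtures): in a SLIM script table you can assign a method's return value to a symbol with $name= and reuse it later on the same page, roughly like this, assuming the fixtures above work as script fixtures:
!|script|com.mystest.fitnesse.fixtures.DBFixture|
|$myname=|execute query|SELECT name FROM customer WHERE id = 1|

!|script|com.mytest.fitnesse.fixtures.SSHFixture|
|show|execute|echo $myname|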
I found numerous sites explaining SSH port forwarding, SSH reverse proxy, SSH multiplexing, etc., involving sshpiper, sslh, running an SSH SOCKS server, just configuring the local SSH server and so on, so I'm quite puzzled right now and might ask a very common and/or simple question:
As you might already guess from the title, I want to set up a Git server (GitLab) inside a Docker container listening for SSH connections on port 22, without having to use a different port for default SSH operations (terminal, scp, etc.) on the host (as suggested here).
I.e.
ssh alice@myserver.org should still be possible, as well as
git clone git@myserver.com:path/to/project
and I don't want to do any setup on the client computer.
If you prefer a picture:
                                  +------ myserver.org --------+
                                  |   +----+   +- typical -+   |
+--------+  alice@myserver.org:22 |   |    |   |    SSH    |   |
| client | -----------------------+---+----+-->|  service  |   |
+--------+  all names but `git`   |   | ?  |   +-----------+   |
                                  |   |    |                   |
                                  |   | ?  |   +- docker --+   |
+--------+  git@myserver.org:22   |   |    |   |   with    |   |
| client | -----------------------+---+----+-->|  GitLab   |   |
+--------+  only user `git`       |   |    |   |           |   |
                                  |   +----+   +-----------+   |
                                  +----------------------------+
Can you tell me what's the recommended/most common way to do this? This question sounds promising, but the answer seems to involve configuring the client (which I want to avoid).
This project may help you.
https://github.com/tg123/sshpiper.
SSH Piper works as a proxy and routes connections by username, source IP, etc.
+---------+                       +------------------+          +-----------------+
|         |                       |                  |          |                 |
|   Bob   +----ssh -l bob----+    |    SSH Piper     +--------->|  Bob' machine   |
|         |                  |    |                  |          |                 |
+---------+                  |    |                  |          +-----------------+
                             +--->|  pipe-by-name    |
+---------+                  |    |                  |          +-----------------+
|         |                  |    |                  |          |                 |
|  Alice  +----ssh -l alice--+    |                  +--------->|  Alice' machine |
|         |                       |                  |          |                 |
+---------+                       +------------------+          +-----------------+
 Downstream                             SSH Piper                    Upstream
First of all, thanks for reading the TheDockerExperts blog, hope our articles help you! Let me explain how we do SSH proxying in our company.
We have HAProxy listening on TCP port 22 and sending traffic to the GitLab server; on the host we use a custom SSH port. Unfortunately, since this is plain TCP balancing, there is no way to route based on domain names or users. You can take a small VPS, spin up HAProxy on it and use it to balance your Git traffic.
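A minimal haproxy.cfg sketch of that TCP pass-through (the GitLab backend address is of course an assumption, and the host's own sshd is moved to a custom port such as 2222):
frontend ssh_in
    bind *:22
    mode tcp
    default_backend gitlab_ssh

backend gitlab_ssh
    mode tcp
    # Forward raw TCP to the GitLab container's SSH port
    server gitlab 172.17.0.2:22 check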
Hope this will help you!
While meddling with an experimental DreamFactory 2.1 installation, I accidentally disabled the user service through the admin console. The message when trying to log in is:
Service user is deactivated
How do I get around this problem? Is there a configuration file or something that I need to edit to turn this back on? After a bit of probing I saw this in the table called "service" in the MySQL db (bitnami_dreamfactory):
+-------------------------+-----------+
| name | is_active |
+-------------------------+-----------+
| system | 1 |
| api_docs | 1 |
| files | 0 |
| db | 0 |
| email | 0 |
| user | 0 |
| mysql | 0 |
| mongodb | 1 |
| scr-insert | 1 |
| testdb | 1 |
| test-mlabs | 1 |
+-------------------------+-----------+
Can I just go ahead and issue an UPDATE statement to set 0 to 1 for the 'user' service?
Thanks,
M&M
Yes, and then clear the application cache using 'php artisan cache:clear'.
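In other words, something along these lines (table and column names as shown in the question; the MySQL credentials and running artisan from the DreamFactory install directory are assumptions):
mysql -u root -p bitnami_dreamfactory \
  -e "UPDATE service SET is_active = 1 WHERE name = 'user';"
# then, from the DreamFactory installation directory:
php artisan cache:clear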