Does Release Management 2013 roll back across tags?

We're heavy users of tags and I'm confused about how tags and rollbacks interact.
I understand that rollbacks cascade (at least within a sequence) from this article:
http://incyclesoftware.com/2014/03/understanding-rollbacks-release-management/
But I'm not clear how this interacts with tags. We tag servers by which features are installed on them (web, database, service) and vary the mix of features depending on the environment (e.g. DEV might have web & services running on the same machine, but UAT & PROD would have separate machines).
So does a rollback reach back across tag boundaries? For example, if your sequence looked like this:
+--Database tag --+
| Backup DB |
| | |
| Update DB |
| | | <- Runs against SQL server
| +--Rollback--+ |
| | Restore DB | |
| +------------+ |
+-----------------+
|
+---Web Tag-------+
| Do Stuff | <- Runs against WEB server
+-----------------+
|
+---Service tag----+
| Backup |
| | |
| Install new ver | <- Runs against Service server
| | |
| Smoke test |
| | |
| +--Rollback----+ |
| | Replace with | |
| | backup | |
| +--------------+ |
+------------------+
Would a rollback inside the service tag cause the database tag to execute its rollback? Do rollbacks cascade across sequences?
I haven't had time to set this up and test it yet, so I thought I'd ask the question instead.

By accident I've managed to test this out with a suitable release, and rollback does cascade across the tags, as @joerage says.

It appears I was wrong... faulty memory and all that. Rollbacks work across tag boundaries.
I generally recommend against using rollback blocks, since their behavior is generally backwards, unpredictable, and not immediately obvious. The current best practice is actually to not use agent-based releases at all, as they will not be portable to the forthcoming Release Management Service.


ROS services are not building properly

I am trying to build my custom ROS services. They are inside another parent package; the structure is as follows:
|--catkin_ws
| |--src
| | |--Parent
| | | |--CMakeLists.txt
| | | |--package.xml
| | | |--ChildA
| | | | |--CMakeLists.txt
| | | | |--package.xml
| | | | |--srv
| | | | | |--SomeService.srv
| | | |--ChildB
The packages are building correctly and I am able to use them in other nodes and packages.
However, when I run rossrv list, the custom services do not appear. I think this is causing some issues when I try to build my Simulink controller, because it cannot find the service message definition.
Does anyone have any idea what is going on?
I was able to fix the problem; while not obvious, the solution was rather simple. I had to slightly change the structure of the package by making the parent package a metapackage, then do some handling to make sure that the sub-packages still had access to the CMake files used to locate my external packages.
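For anyone hitting the same symptom, a quick sanity check is to rebuild, re-source the workspace, and query rossrv again. A minimal sketch, assuming the workspace from the tree above lives at ~/catkin_ws and the service type is ChildA/SomeService (adjust the names to your packages):
# Rebuild the workspace and reload its environment
cd ~/catkin_ws
catkin_make
source devel/setup.bash
# The custom service types should now show up
rossrv list | grep SomeService
# And the full definition should resolve as <package>/<ServiceType>
rossrv show ChildA/SomeService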

DRBD StandAlone with new resource

I have two nodes, "A" (primary) and "B". Each node has 3 resources. On node "B" I replaced the disks and after that added each resource back as secondary. Two resources successfully connect and sync, but I have an issue with one resource. When I stop this resource on node "B", node "A" shows:
lv1 role:Primary
disk:UpToDate
b.host connection:Connecting
When I start this resource on node "B", node "A" shows:
lv1 role:Primary
disk:UpToDate
b.host connection:StandAlone
and node "B" shows:
lv1 role:Secondary
disk:Inconsistent
a.host connection:Connecting
I have tried everything: removing/re-adding the resource, recreating the metadata on node "B", removing the data, rebuilding the RAID, drbdadm connect lv1 --discard-my-data, etc.
One difference between the working and broken resources is the flag This node was a crashed primary, and has not seen its peer since:
E43824C7BC375B4A:626476078D91E933:CC1DC3FAD143EDCC:E4E71860FBA887C2:1:1:1:1:0:0:0:0:0:0:0:1
The four UUIDs are, in order: the current data generation UUID, the bitmap's base data generation UUID, the younger history UUID, and the older history UUID. The flag bits that follow are, in order:
- Data consistency flag
- Data was/is currently up-to-date
- Node was/is currently primary
- This node was a crashed primary, and has not seen its peer since
- The activity-log was applied, the disk can be attached
- The activity-log was disabled, peer is completely out of sync
- This node was primary when it lost quorum
- Node was/is currently connected
- The peer's disk was out-dated or inconsistent
- A fence policy other than dont-care was used
- Node was in the progress of marking all blocks as out of sync
- At least once we saw this node with a backing device attached
Any ideas how I can fix it?
UPDATE: I found a new difference in the kernel log: drbd lv1/0 drbd1 b.host: The peer's disk size is too small! (999671944 < 1000196216 sectors)
In my case, node "A" (primary) uses LVM and node "B" uses MDRAID. This gives a difference in resource size of 524,272 sectors. What I did to save my data: I ran both resources in primary mode, mounted and copied the data from "A" to "B", then rebuilt node "A" on MDRAID and synced the resources.
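If you need to confirm a size mismatch like this before reconnecting, comparing the backing devices directly is usually enough. A rough sketch, assuming the resource is lv1 and using hypothetical backing devices /dev/vg0/lv1 (LVM on node "A") and /dev/md3 (MDRAID on node "B"); the --discard-my-data placement follows the drbdadm invocation from the question and can differ between DRBD versions:
# Report each backing device's size in 512-byte sectors; the secondary's
# device must be at least as large as the primary's, otherwise DRBD
# refuses with "The peer's disk size is too small!"
blockdev --getsz /dev/vg0/lv1    # on node "A" (LVM)
blockdev --getsz /dev/md3        # on node "B" (MDRAID)
# Once the sizes match, reconnect node "B" and discard its stale data
drbdadm disconnect lv1
drbdadm connect lv1 --discard-my-data
drbdadm status lv1               # should leave StandAlone and start resyncing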

Cypher load CSV eager and long action duration

I'm loading a file with 85K lines (19 MB).
The server has 2 cores and 14 GB RAM, running CentOS 7.1 and Oracle JDK 8,
and the load can take 5-10 minutes with the following server config:
dbms.pagecache.memory=8g
cypher_parser_version=2.0
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096
disk mounted in /etc/fstab:
UUID=fc21456b-afab-4ff0-9ead-fdb31c14151a /mnt/neodata
ext4 defaults,noatime,barrier=0 1 2
added this to /etc/security/limits.conf:
* soft memlock unlimited
* hard memlock unlimited
* soft nofile 40000
* hard nofile 40000
added this to /etc/pam.d/su
session required pam_limits.so
added this to /etc/sysctl.conf:
vm.dirty_background_ratio = 50
vm.dirty_ratio = 80
disabled journal by running:
sudo e2fsck /dev/sdc1
sudo tune2fs /dev/sdc1
sudo tune2fs -o journal_data_writeback /dev/sdc1
sudo tune2fs -O ^has_journal /dev/sdc1
sudo e2fsck -f /dev/sdc1
sudo dumpe2fs /dev/sdc1
Besides that,
when running the profiler I get lots of "Eager" operators, and I really can't understand why:
PROFILE LOAD CSV WITH HEADERS FROM 'file:///home/csv10.csv' AS line
FIELDTERMINATOR '|'
WITH line limit 0
MERGE (session :Session { wz_session:line.wz_session })
MERGE (page :Page { page_key:line.domain+line.page })
ON CREATE SET page.name=line.page, page.domain=line.domain,
page.protocol=line.protocol,page.file=line.file
Compiler CYPHER 2.3
Planner RULE
Runtime INTERPRETED
+---------------+------+---------+---------------------+--------------------------------------------------------+
| Operator | Rows | DB Hits | Identifiers | Other |
+---------------+------+---------+---------------------+--------------------------------------------------------+
| +EmptyResult | 0 | 0 | | |
| | +------+---------+---------------------+--------------------------------------------------------+
| +UpdateGraph | 9 | 9 | line, page, session | MergeNode; Add(line.domain,line.page); :Page(page_key) |
| | +------+---------+---------------------+--------------------------------------------------------+
| +Eager | 9 | 0 | line, session | |
| | +------+---------+---------------------+--------------------------------------------------------+
| +UpdateGraph | 9 | 9 | line, session | MergeNode; line.wz_session; :Session(wz_session) |
| | +------+---------+---------------------+--------------------------------------------------------+
| +ColumnFilter | 9 | 0 | line | keep columns line |
| | +------+---------+---------------------+--------------------------------------------------------+
| +Filter | 9 | 0 | anon[181], line | anon[181] |
| | +------+---------+---------------------+--------------------------------------------------------+
| +Extract | 9 | 0 | anon[181], line | anon[181] |
| | +------+---------+---------------------+--------------------------------------------------------+
| +LoadCSV | 9 | 0 | line | |
+---------------+------+---------+---------------------+--------------------------------------------------------+
All the labels and properties have indexes / constraints.
Thanks for the help,
Lior
Hey Lior,
we tried to explain the Eager loading here:
And Mark's original blog post is here: http://www.markhneedham.com/blog/2014/10/23/neo4j-cypher-avoiding-the-eager/
Rik tried to explain it in easier terms:
http://blog.bruggen.com/2015/07/loading-belgian-corporate-registry-into_20.html
Trying to understand the "Eager Operation"
I had read about this before, but did not really understand it until Andres explained it to me again: in all normal operations, Cypher loads data lazily. See for example this page in the manual - it basically just loads as little as possible into memory when doing an operation. This laziness is usually a really good thing. But it can get you into a lot of trouble as well - as Michael explained it to me:
"Cypher tries to honor the contract that the different operations
within a statement are not affecting each other. Otherwise you might end up with non-deterministic behavior or endless loops. Imagine a statement like this:
MATCH (n:Foo) WHERE n.value > 100 CREATE (m:Foo {value: n.value + 100});
If the two operations were not isolated, then each node the CREATE generates would cause the MATCH to match again, and so on: an endless loop. That's why in such cases, Cypher eagerly runs all MATCH statements to exhaustion so that all the intermediate results are accumulated and kept (in memory).
Usually with most operations that's not an issue, as we mostly match only a few hundred thousand elements max.
With data imports using LOAD CSV, however, this operation will pull in ALL the rows of the CSV (which might be millions), execute all operations eagerly (which might be millions of creates/merges/matches) and also keep the intermediate results in memory to feed the next operations in line.
This also effectively disables PERIODIC COMMIT, because when we get to the end of the statement execution all create operations will already have happened and the gigantic tx-state has accumulated."
So that's what's going on in my LOAD CSV queries: the MATCH/MERGE/CREATE caused an Eager pipe to be added to the execution plan, and it effectively disables the batching of my operations "using periodic commit". Apparently quite a few users run into this issue even with seemingly simple LOAD CSV statements. Very often you can avoid it, but sometimes you can't.
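A common way to avoid the Eager in this kind of import, as described in Mark's post linked above, is to split the statement so that each pass performs only one MERGE; that keeps the planner from inserting the Eager and lets USING PERIODIC COMMIT batch the work. A rough sketch as shell commands, assuming a Neo4j 2.3 install with neo4j-shell on the PATH and the same csv10.csv file (the batch size and file paths are placeholders):
# Pass 1: MERGE only the Session nodes
cat > /tmp/import_sessions.cql <<'EOF'
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM 'file:///home/csv10.csv' AS line FIELDTERMINATOR '|'
MERGE (:Session { wz_session: line.wz_session });
EOF
# Pass 2: MERGE only the Page nodes
cat > /tmp/import_pages.cql <<'EOF'
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM 'file:///home/csv10.csv' AS line FIELDTERMINATOR '|'
MERGE (page:Page { page_key: line.domain + line.page })
ON CREATE SET page.name = line.page, page.domain = line.domain,
              page.protocol = line.protocol, page.file = line.file;
EOF
neo4j-shell -file /tmp/import_sessions.cql
neo4j-shell -file /tmp/import_pages.cql
With the MERGEs separated like this, PROFILE on each pass should no longer show an Eager operator.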

Removal of Role PostgreSQL Failed - cache lookup failed for database

This is my first time using PostgreSQL for production.
I made a database blog_production with username blog_production and a password generated by the capistrano-postgresql gem. Once it was generated, I tried to delete the database blog_production with this command from the terminal:
$ sudo -u postgres dropdb blog_production
After that I tried to delete the user blog_production with this command:
$ sudo -u postgres dropuser blog_production
And it returned dropuser: removal of role "blog_production" failed: ERROR: cache lookup failed for database 16417
1.) Why is this happening?
2.) I also tried to delete from psql using DELETE FROM pg_roles WHERE rolname='blog_production' but it returned the same error (cache lookup failed)
3.) How do I solve this problem?
Thank you.
Additional Information
PostgreSQL Version
PostgreSQL 9.1.15 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
(1 row)
select * from pg_shdepend where dbid = 16417;
dbid | classid | objid | objsubid | refclassid | refobjid | deptype
-------+---------+-------+----------+------------+----------+---------
16417 | 1259 | 16419 | 0 | 1260 | 16418 | o
16417 | 1259 | 16426 | 0 | 1260 | 16418 | o
16417 | 1259 | 16428 | 0 | 1260 | 16418 | o
(3 rows)
select * from pg_database where oid = 16417;
datname | datdba | encoding | datcollate | datctype | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datacl
---------+--------+----------+------------+----------+---------------+--------------+--------------+---------------+--------------+---------------+--------
(0 rows)
select * from pg_authid where rolname = 'blog_production'
rolname | rolsuper | rolinherit | rolcreaterole | rolcreatedb | rolcatupdate | rolcanlogin | rolreplication | rolconnlimit | rolpassword | rolvaliduntil
-----------------+----------+------------+---------------+-------------+--------------+-------------+----------------+--------------+-------------------------------------+---------------
blog_production | f | t | f | f | f | t | f | -1 | md5d4d2f8789ab11ba2bd019bab8be627e6 |
(1 row)
Somehow the DROP DATABASE didn't drop the shared dependencies correctly. PostgreSQL still thinks that your user owns three tables in the database you dropped.
Needless to say this should not happen; it's almost certainly a bug, though I don't know how we'd even begin tracking it down unless you know exactly what commands you ran, etc., to get to this point, right from creating the DB.
If the PostgreSQL install's data isn't very big and if you can share the contents, can I get you to stop the database server and make a tarball of the whole database directory, then send it to me? I'd like to see if I can tell what happened to get you to this point.
Send a dropbox link to craig#2ndquadrant.com . Just:
sudo service postgresql stop
sudo tar cpzf ~abrahamks/abrahamks-postgres.tar.gz \
  /var/lib/postgresql/9.1/main \
  /etc/postgresql/9.1/main \
  /var/log/postgresql/postgresql-9.1-main-* \
  /usr/lib/postgresql/9.1
sudo chown abrahamks ~abrahamks/abrahamks-postgres.tar.gz
and upload abrahamks-postgres.tar.gz from your home folder.
Replace abrahamks with your username on your system. You might need to adjust the paths above if I'm misremembering where the PostgreSQL data lives on Debian-derived systems.
Note that this contains all your databases not just the one that was an issue, and it also contains your PostgreSQL user accounts.
(If you're going to send me a copy, do so before continuing):
Anyway, since the database is dropped, it is safe to manually remove the dependencies that should've been removed by DROP DATABASE:
DELETE FROM pg_shdepend WHERE dbid = 16417;
It should then be possible to DROP USER blog_production;
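Putting the whole cleanup together, a minimal sketch run from the shell as the postgres superuser, reusing the dbid 16417 from the question:
# Remove the orphaned shared-dependency rows left behind by the dropped database
sudo -u postgres psql -c "DELETE FROM pg_shdepend WHERE dbid = 16417;"
# The role should now drop cleanly
sudo -u postgres dropuser blog_production
# Verify it is gone
sudo -u postgres psql -c "SELECT rolname FROM pg_authid WHERE rolname = 'blog_production';"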

Obtain Bacula status in parseable format

Is it possible to obtain the status of Bacula backup system Director in some parseable format?
It looks like the human-readable representation (the one you can see when using bacula-console) is formed on the Director side during the TCP control connection.
In what language? The easiest way would be to invoke bconsole and send commands via stdin, then parse stdout and stderr.
bconsole has an interactive mode, but if you know the commands in advance, this is not an issue.
You can also pull directly from the database, depending on your needs.
Example:
mysql> select JobId, Name, JobStatus from Job ORDER BY JobId DESC Limit 10;
+--------+-------------------------------------+-----------+
| JobId | Name | JobStatus |
+--------+-------------------------------------+-----------+
| 231215 | dbs16 Daily MysqlC XBM Snapshot | T |
| 231214 | dbs09 Daily MysqlS XBM Snapshot | T |
| 231213 | dbs10 Daily MysqlQ XBM Snapshot | T |
| 231212 | dbs11 Daily MysqlT XBM Snapshot | T |
| 231211 | dbs16 Daily MysqlI XBM Snapshot | T |
| 231210 | dbs19 Daily MysqlE XBM Snapshot | T |
| 231209 | dbs18 Daily MysqlB XBM Snapshot | R |
| 231208 | dbs17 Daily MysqlG XBM Snapshot | R |
| 231207 | Daily Catalog Backup | C |
| 231206 | adm6 svnops SVN Backup | R |
+--------+-------------------------------------+-----------+
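To illustrate both approaches, a short sketch; the catalog credentials and database name (bacula/bacula) are placeholders, and the exact output format depends on your Director version:
# Feed console commands to bconsole on stdin and capture the output for parsing
printf 'status director\nquit\n' | bconsole > director-status.txt
# Or query the catalog directly in batch (tab-separated) mode
mysql -B -u bacula -p \
  -e "SELECT JobId, Name, JobStatus FROM Job ORDER BY JobId DESC LIMIT 10;" \
  bacula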
