I currently have two Firebase Realtime Databases:
meetingapp (default)
meetingnotes
They are two different databases under the same account.
The first one (meetingapp) has the following structure:
/ meetingapp / prod / uid
the second one (meetingnotes) has the following structure:
/ meetingnotes / label / definition
With Zapier I can easily add to the path /meetingapp/prod/uid, but I cannot find any reference on how to refer to the path of the second DB.
I think the problem is having two "roots", and I am hoping someone has successfully written to the second root.
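For what it's worth, each Realtime Database instance is addressed by its own base URL, not as a path under the default instance. A minimal sketch of writing to the second instance over the REST API (the URL, path, and payload below are placeholders, and authentication is omitted):

```python
import json
import urllib.request

# The second database has its own root URL (placeholder below);
# it is not reachable as a path inside the default instance.
url = "https://meetingnotes.firebaseio.com/label/definition.json"
payload = json.dumps({"definition": "example"}).encode()

req = urllib.request.Request(url, data=payload, method="PUT")
# urllib.request.urlopen(req)  # requires network access and auth
```

In Zapier terms, this means pointing the action at the second database's own URL rather than a sub-path of the default one.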
I am accessing an IMAP server using Python's imaplib.
I am doing this for three different mailboxes: box1@mydomain.de, box2@mydomain.de, other@otherdomain.de.
In each case, I want to access the folder INBOX.
In each case, a folder listing confirms that this folder exists at the top level:
b'(\HasNoChildren) "." "INBOX"'
All three mailboxes are at the same ISP and are accessed via the same IMAP server name.
The code is working fine for box2 and other.
For box1, however, it fails with this error message:
Couldn't select folder A/INBOX: NO / [CANNOT] Non-supported characters in the mailbox name
I have determined that A will always be the name of the first subfolder:
b'(\HasChildren) "." "A"'
If I rename A to Z, the next existing subfolder will take the place of A in the error message.
Question 1: Why is the server prepending a subfolder when I ask for INBOX?
I gather from the error message that the / in the name is the problem,
because the server uses . as its folder name separator?
Question 2: Why is the server so silly as to construct a name it will not accept?
Question 3: What can I do to repair the mailbox? (I have no control of that server.)
Question 4: Does anybody know a programmer who has ever written code using IMAP and thinks IMAP is a good protocol? (semi-serious question)
Thank you @Max, your request for code did the job. I used the debugger and found the problem in the framework I am using:
I am using a subclass of mailprocessing.processor.imap.ImapProcessor that turns what is meant as a command-line tool into an API for my script.
ImapProcessor, unless told otherwise, by default uses this logic in its constructor:
# This should catch at least some of these weird IMAP servers
# that store everything under INBOX. Use --folder-prefix for
# the rest for now.
if cmd_prefix is None and root_has_children:
    self.prefix = root_folder
and this root_folder is simply the first folder in the folder list,
in my case A.
If I construct my object as ImapProcessor(..., folder_prefix=""), all works as intended.
(Overall, mailprocessing has nice functionality, but also a number of weird decisions, plus two bugs to get over.)
So the answers to my questions 1, 2, 3 are "it doesn't", "it isn't", "no repair needed". Instead, my code simply called a functionality I had not expected to exist.
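For illustration, the failure mode boils down to prefix-joining logic like the following sketch (the function and the hard-coded '/' join are my assumptions about mailprocessing's behavior, inferred from the error message):

```python
def qualify(folder, prefix=""):
    """Mimic a client that auto-detects a folder prefix and joins
    it with a hard-coded '/', even though this server uses '.' as
    its folder separator."""
    if not prefix:
        return folder
    return prefix + "/" + folder

# Without a prefix, INBOX is selected as-is:
print(qualify("INBOX"))               # INBOX
# With the auto-detected prefix "A" (the first folder in the
# listing), the client asks for "A/INBOX", which the server
# rejects because '/' is not valid in a mailbox name here:
print(qualify("INBOX", prefix="A"))   # A/INBOX
```

Passing folder_prefix="" simply forces the empty-prefix branch above.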
Is there a way to get the Google account name running the DAGs from the DAG definition?
This would be very helpful for tracking which user was running the DAGs.
I can only see:
unixname --> always airflow
owner --> fixed in the dag definition
Regards
Eduardo
This is possible, as DAGs in Composer are essentially GCS objects, and the GCS object GET API does tell you who uploaded that object. So here's one possible way of getting owner info:
Define a function user_lookup() in your DAG definition.
The implementation of user_lookup() consists of the following steps: a) get the current file path (e.g., os.path.basename(__file__)); b) based on how Composer GCS objects are mapped locally, determine the corresponding GCS object path (e.g., gs://{your-bucket}/object); c) read the GCS object details and return object.owner.
In your DAG definition, set owner=user_lookup().
Let us know whether the above works for you.
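Steps a) and b) can be sketched as follows (the local mount point /home/airflow/gcs/dags is an assumption about how Composer maps the DAGs bucket; step c) would use the google-cloud-storage client and is only indicated in comments):

```python
import os

# Assumed local mount point for the DAGs bucket in Composer.
LOCAL_DAGS_PREFIX = "/home/airflow/gcs/dags"

def dag_gcs_path(local_path, bucket):
    """Map a DAG file's local path to its GCS object path.
    user_lookup() would then fetch that object's metadata
    (e.g. blob.owner via google-cloud-storage) and return it."""
    rel = os.path.relpath(local_path, LOCAL_DAGS_PREFIX)
    return f"gs://{bucket}/dags/{rel}"

print(dag_gcs_path("/home/airflow/gcs/dags/my_dag.py", "your-bucket"))
# gs://your-bucket/dags/my_dag.py
```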
I'm new to Laravel (using 5.6) and can't get my links to work.
My directory structure is: resources/views/pages/samples
In the samples directory, I have 10 blade files I want to link to (named "sample1.blade.php", etc.). I have a "master" links page in the pages directory (one level up from samples).
I've tried the following but can't get any of them to work correctly...
Sample 1
Sample 1
Sample 1
Sample 1
...and a few other variations.
I've also tried adding a base tag to the HTML header but that doesn't help.
Every time I click a link, it says "Sorry, the page you are looking for could not be found."
What am I missing?
Thanks @happymacarts, I didn't realize I had to add a path for every single page in my site.
After adding the paths, the links are working.
I will get into the practice of updating the paths every time I add a page.
I've configured a Jenkins job that creates a CloudFormation stack which contains among other things creation of a RDS DB instance and restores the DB instance from the latest available snapshot for that database in a specific environment.
There are 3 different environments: Dev, Stg and Prd, and each environment has its own database.
Currently, when a user chooses Build with parameters, they are asked to choose, among other things, an Environment from a Choice Parameter list and an RDS snapshot ID from an Extended Choice Parameter list which is populated (by running some Groovy code) with the latest RDS snapshot IDs of each of the databases (Dev, Stg, Prd).
So basically, the user needs to manually select the Environment name and RDS snapshot ID.
In order to avoid human errors such as choosing Prd as Environment and Dev for RDS Snapshot ID, I'd like to configure the RDSSnapshotId parameter (the one which is populated by a Groovy script) to be set conditionally by the Environment selected.
Meaning that if the user selects Dev the RDSSnapshotId parameter will be populated with the corresponding RDS Snapshot IDs per that Environment.
Can it be done?
1. Create a new project that runs a Groovy script which creates a file in your CloudFormation job's folder according to 4. below and then runs your CloudFormation job.
2. Select Extended Choice Parameter → ◉ Multi-level Parameter Types
3. Select Parameter Type: Multi-Level Single-Select ▼
4. Property File: <CloudFormation job's folder>/ChoiceParameters.txt
   Looks like:
   Environment→RDS Snapshot ID
   Dev→Dev ID 1
   Dev→Dev ID 2
   Stg→Stg ID 1
   Stg→Stg ID 2
   Prd→Prd ID 1
   Prd→Prd ID 2
5. Value: Environment,RDS Snapshot ID
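For illustration, that property file is just a header row plus value pairs (the arrows above stand in for tab separators, which is my assumption about the plugin's format; the snapshot IDs are placeholders, and the real job would generate the file from Groovy as in step 1):

```python
# Build the multi-level property file consumed by the
# Extended Choice Parameter plugin (tab-separated columns).
rows = [
    ("Environment", "RDS Snapshot ID"),  # header row
    ("Dev", "Dev ID 1"), ("Dev", "Dev ID 2"),
    ("Stg", "Stg ID 1"), ("Stg", "Stg ID 2"),
    ("Prd", "Prd ID 1"), ("Prd", "Prd ID 2"),
]
content = "\n".join("\t".join(r) for r in rows) + "\n"

with open("ChoiceParameters.txt", "w") as f:
    f.write(content)
```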
That's the theory, at least. Unfortunately it doesn't work in my Jenkins v2.73.3 with Extended Choice Parameter Plug-In v0.76 at the moment (no second-level items are displayed after selecting a first-level item), but I know that it worked with previous versions.
Got it: The parameter's Name must not contain spaces, of course, since it becomes an environment variable.
UPDATE
I tried the example mentioned at Extended Choice Parameter plugin → Version 0.44 (Jun 02, 2015) → Advanced Ex: and it looks promising at first sight.
I have a data model that starts with a single record; this has a custom "recordId" that's a UUID, and it then relates out to other nodes, which in turn relate to each other. That starting node defines the data that "belongs" together, as if we had separate databases inside Neo4j. I need to export this data into a backup data set that can be re-imported into either the same or a new database with ease.
After some help, I'm using APOC to do the export:
call apoc.export.cypher.query("MATCH (start:installations)
WHERE start.recordId = \"XXXXXXXX-XXX-XXX-XXXX-XXXXXXXXXXXXX\"
CALL apoc.path.subgraphAll(start, {}) YIELD nodes, relationships
RETURN nodes, relationships", "/var/lib/neo4j/data/test_export.cypher", {})
There are then 2 problems I'm having:
Problem 1 is that the exported data uses internal Neo4j identifiers to generate the relationships. This is bad if we need to import into a new database and the UNIQUE IMPORT ID values already exist. I need this data generated with my own custom recordIds as the point of reference.
Problem 2 is that the import doesn't even work.
call apoc.cypher.runFile("/var/lib/neo4j/data/test_export.cypher") yield row, result
returns:
Failed to invoke procedure apoc.cypher.runFile: Caused by: java.lang.RuntimeException: Error accessing file /var/lib/neo4j/data/test_export.cypher
I'm hoping someone can help me figure out what may be going on, but I'm not sure what additional info is helpful. No one in the Neo4j slack channel has been able to help find a solution.
Thanks.
Problem 1:
The exported file does not contain any internal Neo4j ids. It is not safe to use Neo4j ids outside the database, since they are not globally unique, so you should not use them to transfer data from one database to another.
If you want to use globally unique ids, you can use an external plugin like the GraphAware UUID plugin. (Disclaimer: I work for GraphAware.)
Problem 2:
If you cannot access the file, possible reasons are:
- apoc.import.file.enabled=true is not set in neo4j.conf
- OS-level file permissions are not set
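A minimal neo4j.conf fragment covering the first cause (the export call shown earlier needs the corresponding export flag as well; exact settings may vary with your Neo4j/APOC version):

```
# neo4j.conf
apoc.import.file.enabled=true
apoc.export.file.enabled=true
```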