Cytoscape - Name Column Empty - Needed for Selecting Nodes with ID List

I have imported a network to Cytoscape, and under the list of 'Nodes' there is detail in a column called 'Shared Name' but nothing under the column 'Name'.
I want to select nodes based on a list of IDs. When I first tried, nothing happened. When I copied a few of the names into the 'Name' column, it then worked (I believe this is the column the ID selection refers to).
However, I'm trying to select 300 nodes, so I don't want to copy and paste each 'Shared Name' into 'Name' one by one. Is there an easy way to copy and paste the whole column? Or is there a way to select nodes by ID against the 'Shared Name' column instead?
Any help would be greatly appreciated, thank you in advance!

I'm not sure how you were able to import a network without a name column; on initial import, the name and shared name columns should be the same. In any case, you can add an equation to the name column that sets everything to the shared name column. To do this, click on a cell in the Name column, click the formula button "f(x)", select "shared name" from the list of columns, and click insert. Set "Apply to:" to entire column, and choose "Evaluate and insert result". Now your "Name" column is a duplicate of your "Shared name" column.
-- scooter
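If clicking through the formula dialog is inconvenient, another route (an assumption about the workflow, not part of the answer above) is to export the node table to a file from Cytoscape, fill the empty name column from shared name in bulk, and re-import it. A minimal sketch of the fill step, using made-up node names:

```python
# Sketch: fill an empty "name" column from "shared name" in an exported
# node table. The two node names below are hypothetical examples.
import csv
import io

# Stand-in for the exported node-table CSV ("name" is empty everywhere).
exported = io.StringIO("shared name,name\nTP53,\nBRCA1,\n")
rows = list(csv.DictReader(exported))

# Copy "shared name" into "name" for every row.
for row in rows:
    row["name"] = row["shared name"]

# Write the filled table back out, ready to re-import into Cytoscape.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["shared name", "name"])
writer.writeheader()
writer.writerows(rows)
```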

Related

Using a subquery as a parameter

I am using Google Sheets and have a connected query where I am using parameters. When one of the parameters is configured to be a subquery, the query will run, but no results are returned.
For example, here is my (simplified) query:
SELECT *
FROM table
WHERE campaign IN (#CAMPAIGN);
In this example, I have the #CAMPAIGN parameter in the Google Sheet configured as:
SELECT DISTINCT campaign FROM table2
If I manually substitute the parameter in the BQ console, it runs fine and returns the expected results. Is there a reason this functionality does not work with parameter substitution in the Google Sheet? Is there a way around this?
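One plausible explanation (an assumption, not confirmed in this thread): query parameters are bound as literal values rather than spliced into the SQL text, so IN (#CAMPAIGN) compares campaign against the subquery string itself and matches nothing. A minimal sketch with sqlite3 standing in for BigQuery:

```python
# Demonstrate the difference between binding a subquery string as a
# parameter (compares against the literal string) and inlining it
# (executes the subquery). Table and column names mirror the question.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t(campaign TEXT);
    INSERT INTO t VALUES ('a'), ('b');
    CREATE TABLE table2(campaign TEXT);
    INSERT INTO table2 VALUES ('a');
""")

sub = "SELECT DISTINCT campaign FROM table2"

# Bound as a parameter: campaign is compared to the subquery *string*.
bound = conn.execute("SELECT * FROM t WHERE campaign IN (?)", (sub,)).fetchall()

# Textual substitution (what testing in the BQ console does): subquery runs.
inlined = conn.execute(f"SELECT * FROM t WHERE campaign IN ({sub})").fetchall()
```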
Depending on how many SQL SELECT-style lookups you do, it may help to use a #customfunction that I wrote. You need to place my SQL .js file in your Google Sheets project, and the =gsSQL() custom function will then be available.
The one requirement, versus using =QUERY(), is that a unique column title is required for each column.
It is available on github:
gsSQL github project
This example works if each sheet is a table, so it would be entered something like
=gsSQL("SELECT books.id, books.title, books.author_id
FROM books
WHERE books.author_id IN (SELECT id from authors)
ORDER BY books.title")
In this example, I have a sheet named 'books' and another sheet named 'authors'.
If you need to specify a named range or an A1 notation range as a table, this can also be done with a little more work...
=gsSQL("SELECT books.id, books.title, books.author_id
FROM books
WHERE books.author_id IN (SELECT id from authors)
ORDER BY books.title", {{'books', 'books!$A$1:$I', 60};
{'authors', 'authors!$A$1:$J30', 60}}, true)
In this example, the books and authors come from specific ranges, the data will be cached for 60 seconds and column titles are output.

How to insert CDC Data from a stream to another table with dynamic column names

I have a Snowflake stored procedure and I want to use "insert into" without hard coding column names.
INSERT INTO MEMBERS_TARGET (ID, NAME)
SELECT ID, NAME
FROM MEMBERS_STREAM;
This is what I have and column names are hardcoded. The query should copy data from MEMBERS_STREAM to MEMBERS_TARGET. The stream has more columns such as
METADATA$ACTION | METADATA$ISUPDATE | METADATA$ROW_ID
which I am not intending to copy.
I don't know of a way to skip the METADATA columns without hardcoding. However, if you don't want that data, maybe the easiest thing is to add those columns to your target, INSERT using a SELECT *, and later in the stored procedure set them to NULL.
Alternatively, earlier in your stored procedure, run an ALTER TABLE ... ADD COLUMN to add the columns, INSERT using SELECT *, and afterwards run an ALTER TABLE ... DROP COLUMN to remove them. That way your table structure stays the same, albeit briefly having some extra columns.
A SELECT * is not usually recommended, but it's the easiest alternative I can think of.
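Another option is to build the column list at runtime from the table's metadata and exclude the METADATA$ columns. This sketch uses sqlite3 (and its PRAGMA table_info) standing in for Snowflake and its INFORMATION_SCHEMA; table names are taken from the question:

```python
# Build an INSERT column list dynamically, dropping the METADATA$ columns,
# instead of hardcoding column names.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE members_target(id INTEGER, name TEXT);
    CREATE TABLE members_stream(id INTEGER, name TEXT,
        "METADATA$ACTION" TEXT, "METADATA$ISUPDATE" TEXT, "METADATA$ROW_ID" TEXT);
    INSERT INTO members_stream VALUES (1, 'Ada', 'INSERT', 'false', 'r1');
""")

# Introspect the stream's columns and keep only the non-metadata ones.
cols = [row[1] for row in conn.execute("PRAGMA table_info(members_stream)")
        if not row[1].startswith("METADATA$")]
col_list = ", ".join(f'"{c}"' for c in cols)

# Run the generated INSERT ... SELECT with the filtered column list.
conn.execute(
    f"INSERT INTO members_target ({col_list}) "
    f"SELECT {col_list} FROM members_stream"
)
rows = conn.execute("SELECT * FROM members_target").fetchall()
```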

How can I update a single field from the results of a select with a join

I wish to change a flag from Y to N. I have been given a SELECT which gives me the two records whose value I would like to change.
As a newbie to anything more advanced than the basics, I am totally at a loss.
As this is a live table, I am too cautious to attempt this without fully understanding that I will update the correct records.
SELECT b.*, '||', t.* FROM basecode b, type t
WHERE b.b_id IN ('Val1', 'Val2', 'Val3')
AND b.btype_id = t.ttype_id
The resulting query gives me records with multiple fields, but I just wish to change a couple of records' flag fields from 'Y' to 'N'.
Stripping away the other fields, I have
iflag='Y'
oflag='Y'
and just want them to be set to 'N' for the records returned by the previous select.
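One common pattern is to reuse the SELECT's WHERE conditions in an UPDATE, so exactly the rows the SELECT returned are changed. A hedged sketch (sqlite3 syntax; the table and column names are assumed from the question's query, and the sample data is invented):

```python
# Turn the SELECT-with-join's conditions into an UPDATE of the two flags.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE basecode(b_id TEXT, btype_id INTEGER, iflag TEXT, oflag TEXT);
    CREATE TABLE type(ttype_id INTEGER);
    INSERT INTO basecode VALUES ('Val1', 1, 'Y', 'Y'), ('Other', 2, 'Y', 'Y');
    INSERT INTO type VALUES (1);
""")

# Same conditions as the SELECT, so only those records are touched.
conn.execute("""
    UPDATE basecode
       SET iflag = 'N', oflag = 'N'
     WHERE b_id IN ('Val1', 'Val2', 'Val3')
       AND btype_id IN (SELECT ttype_id FROM type)
""")
rows = conn.execute(
    "SELECT b_id, iflag, oflag FROM basecode ORDER BY b_id").fetchall()
```

Running the original SELECT first, then the UPDATE with identical conditions, is a reasonable way to verify the affected rows before touching a live table.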

Data normalization / Searching across multiple fields

I have some denormalized data, along the lines of the following:
FruitData:
LOAD * INLINE [
ID,ColumnA, ColumnB, ColumnC
1,'Apple','Pear','Banana'
2,'Banana','Mango','Strawberry'
3,'Pear','Strawberry','Kiwi'
];
MasterFruits:
LOAD * INLINE [
Fruitname
'Apple'
'Banana'
'Pear'
'Mango'
'Kiwi'
'Strawberry'
'Papaya'
];
And what I need to do is compare these fields to a master list of fruit (held in another table). This would mean that if I chose Banana, IDs 1 and 2 would come up and if I chose Strawberry, IDs 2 and 3 would come up.
Is there any way I can create a listbox that searches across all 3 fields at once?
A list box is just a mechanism that lets you "select" a value in a certain field as a filter. The real magic behind what QlikView is doing comes from the associations made in the data model. Since your tables have no common field, you couldn't, for example, load a list box for Fruitname, click something, and have it alter list boxes for other fields such as ColumnA, B, or C. To get the behavior you want, you need to associate the two tables. This can be accomplished by concatenating the various columns into one column (essentially normalizing the data).
[LinkTable]:
LOAD Distinct ColumnA as Fruitname,
ID
Resident FruitData;
Concatenate([LinkTable])
LOAD Distinct ColumnB as Fruitname,
ID
Resident FruitData;
Concatenate([LinkTable])
LOAD Distinct ColumnC as Fruitname,
ID
Resident FruitData;
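The link-table script above can be sketched in plain Python: it stacks ColumnA/B/C into one Fruitname column keyed by ID, so a fruit maps to every ID it appears in, whichever column it was in. The data is the inline table from the question:

```python
# Normalize three fruit columns into one (fruit, ID) link table.
fruit_data = [
    {"ID": 1, "ColumnA": "Apple", "ColumnB": "Pear", "ColumnC": "Banana"},
    {"ID": 2, "ColumnA": "Banana", "ColumnB": "Mango", "ColumnC": "Strawberry"},
    {"ID": 3, "ColumnA": "Pear", "ColumnB": "Strawberry", "ColumnC": "Kiwi"},
]

# The set comprehension plays the role of the three LOAD Distinct passes.
link_table = sorted({(row[col], row["ID"])
                     for row in fruit_data
                     for col in ("ColumnA", "ColumnB", "ColumnC")})

# Selecting a fruit now yields every ID it appears in, regardless of column.
ids_for_banana = [i for fruit, i in link_table if fruit == "Banana"]
```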
(The original answer included screenshots of the resulting link table, the data model, and the desired selection behavior.)

Adding unique two-column index with already not unique data

I have a rails app and need to add a unique constraint, so that a :record never has the same (:user, :hour) combination.
I imagine the best way to do this is by adding a unique index:
add_index :records, [:user_id, :hour], :unique => true
The problem is, the migration I wrote to do that fails, because my database already has non-unique combinations. How do I find those combinations?
This answer suggests "check with GROUP BY and COUNT" but I'm a total newbie, and I would love some help interpreting that.
Do I write a helper method to do that? Where in my app would that go?
It's too complex to do it in the console, right?
Or should I be looking at some sort of a script?
Thank you!
1. Run this query in your database console (adapted to your records table):
SELECT user_id, hour, COUNT(*) AS n FROM records GROUP BY user_id, hour HAVING n > 1
2. Fix the duplicate rows
3. Re-run your migration
IMHO, you should edit the duplicate data manually so that you can be sure it is fixed correctly.
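The "GROUP BY and COUNT" step can also be sketched in plain Python: count each (user_id, hour) pair and keep the ones that occur more than once. The sample records below are invented for illustration:

```python
# Find (user_id, hour) combinations that appear more than once.
from collections import Counter

records = [
    {"id": 1, "user_id": 7, "hour": 9},
    {"id": 2, "user_id": 7, "hour": 9},   # duplicate combination
    {"id": 3, "user_id": 7, "hour": 10},
]

counts = Counter((r["user_id"], r["hour"]) for r in records)
duplicates = [combo for combo, n in counts.items() if n > 1]
```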
Update:
The OP didn't mention they are using Postgres, and I gave a solution for MySQL.
For Postgres:
Based on this solution: Find duplicate rows with PostgreSQL
Run this query (with the table and PARTITION BY columns adapted to your records table):
SELECT * FROM (
    SELECT id,
           ROW_NUMBER() OVER (PARTITION BY user_id, hour ORDER BY id ASC) AS Row
    FROM records
) dups
WHERE dups.Row > 1
More explanation:
In order to execute the migration and add the unique constraint to your columns, you need to fix the current data first. Usually there's no automatic step for this, to make sure you won't end up with incorrect data.
That's why you need to manually find the duplicate rows and fix them. The given query will show you which rows are duplicates; from there, fix the data and you should be able to run the migration.
Mooore update:
The duplicated rows are not marked as such in the result. For example, if you get this kind of result:
ID  ROW
235 2
236 3
2   2
3   3
you should look up the row with id=235 and then select every row with the same column values as id=235. From there, you'll see every id that duplicates id=235. Then edit them one by one.
