I am using InfluxDB and have run into an interesting situation. I have created around 20-plus databases, as shown below.
Now some of the database names are not visible because they are far down in the drop-down list. I have tried zooming, reducing the font size, and naming databases so that they appear at the start of the list, but these are all temporary workarounds. How can I scroll the database list, or select a specific database using some InfluxDB command?
The InfluxDB version I am using is 1.2.2.
You can do either of these:
Issue a SHOW DATABASES query to gather the list of your databases
Prefix the measurement with the database name, e.g.
SELECT * FROM DATABASENAME..MEASUREMENTNAME
to query a specific database.
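For example, from the influx CLI (the database and measurement names here are just placeholders):
SHOW DATABASES
USE mydb
SELECT * FROM cpu_load LIMIT 10
Or, without switching databases, fully qualify the measurement (the double dot means the default retention policy is used):
SELECT * FROM mydb..cpu_load LIMIT 10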
I don't know why, but I cannot see the stored procedures when I connect the database to Tableau (I use MariaDB). I can only see the data tables.
Has anyone had the same problem? I am a newbie, so I am not sure whether my description is clear.
I want to use the stored procedures.
I found that Tableau does not connect to stored procedures, and one way around this is to use the Initial SQL feature when you connect to your server. Once you are logged in, add a Custom SQL query and for that script simply use
select * from #nameoftemptable
and execute it.
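For MariaDB, the same idea would look roughly like this (a sketch only; the procedure and table names are hypothetical, and it assumes the Custom SQL runs on the same session that executed the Initial SQL, since temporary tables are session-scoped):
-- Initial SQL (runs when Tableau opens the connection); build_report is a
-- hypothetical stored procedure that fills a temporary table named tmp_report
CALL build_report();
-- Custom SQL for the data source
SELECT * FROM tmp_report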
I have a small social website on Rails (planning to port to Phoenix) that uses React for the view; the backend is just a JSON API, with more or less 3,000 users online at any moment. It runs with Postgres/memcached.
When a user, for example, visits their feed page, I do:
Select activities from the database (20 per page)
Select the last 4 comments of each activity from the database (just 1 select)
Select all users referenced by an activity or comment from the database (select users.* from users where id in (1,3,4,5,...100))
I have a cache layer (memcached): when I load users, I first try to load them from memcached; if they are not there, I read them from the database and put them in the cache.
BUT I also have some "listeners" on the users model (and on other referenced models like address and profile) to invalidate the cache if any field changes.
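Roughly, the read-through part looks like this (a simplified sketch; the cache keys and model names are just for illustration):
def cached_users(ids)
  ids.map do |id|
    # Try memcached first; on a miss, read from Postgres and store the result
    Rails.cache.fetch("user/#{id}") do
      User.find(id)
    end
  end
end

class User < ActiveRecord::Base
  # One of the invalidation "listeners" the problem below is about
  after_commit { Rails.cache.delete("user/#{id}") }
end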
The problem:
This cache demands a lot of code.
Sometimes the cache runs out of sync.
I hate having these listeners; they are "side effects".
My question is: is anyone doing something like this?
I have searched a LOT on Google for a cache layer for a JSON API, and it looks like everyone is just using the database directly.
I know that Rails has its own solution (and I guess that Phoenix doesn't have one), but it always ends up using the updated_at column, which means I have to go to the database anyway.
Alternatives:
Live with it; life is not pretty.
Buy a more powerful Postgres instance... is anyone else using memcached like that?
Remove the listeners, put some expires_in (1 or 2 minutes... or more) and let the app show out-of-sync data for a couple of minutes.
thanks for any help!
So I developed a small Neo4j database with the aim of providing users with path-related information (the shortest path from A to B and properties of the individual sections of the path). My programming skills are very basic, but I want to make the database very user-friendly.
Basically, I would like to have a screen where users can choose start location and end location from dropdown lists, click a button, and the results (shortest path, distance of the path, properties of the path segments) will appear. For example, if this database had been made in MS Access, I would have made a form, where users could choose the locations, then click a control button which would have executed a query and produced results on a nice report.
Please note that all the nodes, relationships and queries are already in place. All I am looking for are some tips regarding the most user-friendly way of making the information accessible to the users.
Currently, all I can do is have the users install Neo4j, run it every time they need it, open the browser, edit the Cypher script (writing the locations in as strings) and then execute the query. This makes it rather impractical for users, and I am also worried that a user might corrupt the data.
I'd suggest making a web application using a web framework like Rails, especially if you're new to programming. You can use the neo4j gem for that to connect to your database and create models to access the data in a friendly way:
https://github.com/neo4jrb/neo4j
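For example, a model for your location nodes might look roughly like this (the property and relationship names are just guesses about your data):
class Location
  include Neo4j::ActiveNode
  property :name, type: String
  # Hypothetical relationship; adjust the type to match your graph
  has_many :both, :neighbors, type: :CONNECTS_TO, model_class: :Location
end
A controller could then fill the start/end dropdowns with something like Location.all.map(&:name).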
I'm one of the maintainers of that gem, so feel free to contact us if you have any questions:
neo4jrb#googlegroups.com
http://twitter.com/neo4jrb
Also, you might be interested in looking at my newest project, called meta_model:
https://github.com/neo4jrb/meta_model
It's a Rails app that lets you define your database model (or at least part of it) via the web app UI and then browse/edit the objects via the web app. It's still very much preliminary, but I'd like it to be able to do things like what you're talking about (letting users examine data and the relationships between them in a user-friendly way).
In general you would write a tiny (web/desktop/forms) application that contains the form, takes the form values and issues the Cypher requests with the form values as parameters.
The results can then be rendered as a table or chart or whatever.
You could even run this from Excel or Access with a Macro (using the Neo4j http endpoint).
Depending on your programming skills (which programming languages can you write in?) it can be anything. There is also a Neo4j .NET client (see http://neo4j.com/developer/dotnet), and its author, Tatham Oddie, showed a while ago how to do that with Excel.
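For instance, a minimal sketch in Ruby that posts a parameterised Cypher query to the HTTP transactional endpoint could look like this (the labels, query and parameter names are assumptions about your graph, and you may need basic authentication if it is enabled):
require 'net/http'
require 'json'
require 'uri'

# Shortest-path query with the form values passed in as Cypher parameters
uri = URI('http://localhost:7474/db/data/transaction/commit')
payload = {
  statements: [{
    statement: 'MATCH p = shortestPath((a:Location {name: {from}})-[*]-(b:Location {name: {to}})) RETURN p',
    parameters: { from: 'A', to: 'B' }
  }]
}
request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json', 'Accept' => 'application/json')
# request.basic_auth('neo4j', 'secret')  # uncomment if authentication is enabled
request.body = payload.to_json
response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
puts JSON.parse(response.body)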
I need to extract data from an Access database (all one table, like a flat file) and translate this into a relational database. It seems like this should be possible to do in Rails. Can anyone tell me where I could host both on the same site? I almost have this working on my desktop, but I would like to put it online in order to better deploy it.
BTW, the data cannot be extracted from this Access table into a CSV because, for whatever reason, Access does not allow CSV files greater than around 63K rows, and this table is bigger.
TIA,
--Rick
In my program I have multiple databases. One is fixed and cannot be changed, but there are also some others, the so-called user databases.
I thought I would now have to open one connection per database and connect to each data dictionary. Is it possible to connect to more than one database with one connection by handing over the data dictionary filename? By the way, I am using a local server.
thank you very much,
André
P.S.: Okay, I might have found the answer to my problem.
The keyword is CreateDDLink. The procedure connects to another data dictionary, but a master dictionary has to be set first.
Links may be what you are looking for, as you indicated in the question. You can use the API or SQL to create a permanent link alias, or you can create links dynamically on the fly.
I would recommend reviewing this specific help file page: Using Tables from Multiple Data Dictionaries.
For a permanent alias (using SQL), look at sp_createlink. You can either create the link to authenticate the current user or set up the link to authenticate as a specific user. Then use the link name in your SQL statements:
select * from linkname.tablename
Or dynamically you can use the following which will authenticate the current user:
select * from "..\dir\otherdd.add".table1
However, links are only available to SQL. If you want to use the table directly (i.e. via a TAdsTable component) you will need to create views. See KB 080519-2034. The KB mentions you can't post updates if the SQL statement for the view results in a static cursor, but you can get around that by creating triggers on the view.