In my program I have multiple databases. One is fixed and cannot be changed, but there are also several others, the so-called user databases.
My assumption was that I would have to open a separate connection for every database and connect to each data dictionary individually. Is it possible to connect to more than one database over a single connection by passing the data dictionary filename? By the way, I am using a local server.
Thank you very much,
André
P.S.: Okay, I think I may have found the answer to my problem.
The keyword is CreateDDLink. The procedure connects to another data dictionary, but a master dictionary has to be set first.
Links may be what you are looking for, as you indicated in the question. You can use the API or SQL to create a permanent link alias, or you can create links dynamically on the fly.
I would recommend reviewing this specific help file page: Using Tables from Multiple Data Dictionaries
For a permanent alias (using SQL), look at sp_createlink. You can either create the link to authenticate the current user or set up the link to authenticate as a specific user. Then use the link name in your SQL statements:
select * from linkname.tablename
Or, dynamically, you can use the following, which authenticates the current user:
select * from "..\dir\otherdd.add".table1
However, links are only available to SQL. If you want to use the table directly (i.e. via a TAdsTable component) you will need to create views. See KB 080519-2034. The KB mentions you can't post updates if the SQL statement for the view results in a static cursor, but you can get around that by creating triggers on the view.
I have a connector that selects data from an external database in Bonitasoft. I was able to assign the result of this select to a process variable and display the contents of that variable in the UI Designer in an Input-type field, but I need to display it in a table. How do I configure this in the UI Designer so that the contents of my process variable are displayed in a table? If possible, please show it step by step. Thank you. Also, how can I pass the contents of my connector to a business variable and then display that in the UI Designer as well, if needed?
First of all, I want to point out that using business or process variables to store data that is already stored in an external database will lead to duplication.
The recommended solution would be to use a REST API extension (such as the REST API SQL data source) to query the data from the external database and use that extension in the forms.
Now, if you are OK with duplicating the data, you can refer to an example I created that demonstrates how to set process and business data with the PostgreSQL connector output and then display the variables in form table widgets.
I highly recommend using a business variable, and if possible consider migrating the data from the external database to the business data database.
I am looking for a solution for logging data changes for a public API.
The client app needs to be told which tables from the DB have changed and need to be synchronised since the app last synchronised, and the answer also needs to be specific to a brand and country.
Current Solution:
A Version table holding the class_names of models, touched by every model on create, delete, touch, and save.
When we touch the Version for a specific model, we also look at its reflected associations and touch those too.
The Version model is scoped to brand and country.
The REST API responds to a request that includes last_sync_at (a timestamp), brand, and country.
Rails looks up the Version records with the given attributes and returns the class_names of the models that were changed since the last_sync_at timestamp.
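Roughly, the touch logic looks like this (a simplified, illustrative sketch; the Version schema and the brand_id/country_id columns here stand in for our real ones):

module Versioned
  extend ActiveSupport::Concern

  included do
    # Log a change for this model's table on every write.
    after_commit :touch_version, on: [:create, :update, :destroy]
  end

  private

  def touch_version
    touch_version_for(self.class.name)
    # Also touch the Version rows of all reflected associations.
    self.class.reflect_on_all_associations.each do |assoc|
      touch_version_for(assoc.class_name)
    end
  end

  def touch_version_for(class_name)
    # brand_id and country_id are assumed to exist on the record.
    Version.find_or_create_by!(
      class_name: class_name,
      brand_id: brand_id,
      country_id: country_id
    ).touch
  end
end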
This solution works, but performance is a problem, and it is also hard to maintain.
UPDATE 1:
Maybe the simpler question is:
What is the best practice for finding out, and telling frontend apps, when and what needs to be synchronised, in terms of the whole concept?
Conditions:
Frontend apps need to download only their own content changes, not the whole dataset.
Synchronisation must not be triggered when an application from a different country or brand needs to be synchronised.
Thank you.
I think the best solution would be to use Redis (or some other key-value store) and save your information there. Writing to Redis is much faster than writing to any SQL DB. You can write a service class that saves the data like:
RegisterTableUpdate.set(table_name, country_id, brand_id, timestamp)
Such a call would save the given timestamp under a key that could look like, for example, table-update-1-1-users, where the first number is the country id and the second is the brand id, followed by the table name (or you could use country and brand names if needed). If you want to find out which tables have changed, you just need to find the Redis keys matching the pattern "table-update-1-1-*", iterate through them, and check which are newer than the timestamp sent through the API.
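A minimal sketch of such a service class, assuming the redis gem and the key scheme above (all names are illustrative):

require "redis"

class RegisterTableUpdate
  REDIS = Redis.new # assumes a local Redis with default settings

  # Save the update timestamp under e.g. table-update-1-1-users.
  def self.set(table_name, country_id, brand_id, timestamp)
    REDIS.set("table-update-#{country_id}-#{brand_id}-#{table_name}", timestamp.to_i)
  end

  # Return the names of tables changed since last_sync_at for one country/brand.
  def self.changed_tables(country_id, brand_id, last_sync_at)
    REDIS.scan_each(match: "table-update-#{country_id}-#{brand_id}-*")
         .select { |key| REDIS.get(key).to_i > last_sync_at.to_i }
         .map { |key| key.split("-").last }
  end
end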
It is worth remembering that Redis is not as reliable as SQL databases. Its reliability depends on configuration, so you might want to read the Redis guidelines and decide whether you want to go for it.
You can take advantage of the fact that ActiveRecord automatically records every time it updates a table row (the updated_at column).
When checking what needs to be updated, select the objects you are interested in and compare their updated_at with the timestamp from the client app.
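A minimal sketch, assuming a Product model and a last_sync_at parameter sent by the client:

# Everything that changed since the client last synced.
last_sync_at = Time.zone.parse(params[:last_sync_at])
changed = Product.where("updated_at > ?", last_sync_at)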
The advantage of this approach is that you don't need to keep an additional table that lists all the updates on models, which should speed things up for the API users and be easier to maintain.
The disadvantage is that you cannot see the changes in the data over time; you only know that a change occurred, and you can access the latest version. If you need to track changes in the data over time efficiently, then I'm afraid you'll have to rework things from the top.
(Read the last part; that is what you are interested in.)
I would recommend that you use the decorator design pattern for changing the client queries. The client sends a query for what it wants, and the server decides what to give it based on the client's last update.
so:
the client sends a query that includes the time it last synced
the server sees the query and takes into account the client's nature (device-country)
the server decorates (changes accordingly) the query to request from the DB only the relevant data, and if that is not possible:
after the data are returned from the database manager, they are trimmed down to what is relevant to where they are going
the server returns to the client all the new data that the client cares about.
I assume that you have a time-entered field on your DB entries.
In that case, the "decoration" of the query would (abstractly) just be to add something like a WHERE clause stating that you want data entered after the last update.
Finally, if you want this done for many devices/locales/whatever, implement a decorator for the query and one for the result of the query, and serve them to your clients as they should be served. (Keep in mind that, in contrast with a subclassing approach, you only have to implement one decorator per device/locale/whatever, not one for every combination!)
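To make this concrete, here is a minimal Ruby sketch of the idea (the Article model, its columns, and all names are illustrative assumptions):

# The base query: what the client asked for.
class ArticleQuery
  def relation
    Article.all
  end
end

# Decorator that narrows any query to records changed since the last sync.
class SinceLastSyncDecorator
  def initialize(query, last_sync_at)
    @query = query
    @last_sync_at = last_sync_at
  end

  def relation
    @query.relation.where("updated_at > ?", @last_sync_at)
  end
end

# Decorator that narrows any query to the client's country and brand.
class LocaleDecorator
  def initialize(query, country_id:, brand_id:)
    @query = query
    @country_id = country_id
    @brand_id = brand_id
  end

  def relation
    @query.relation.where(country_id: @country_id, brand_id: @brand_id)
  end
end

# Decorators stack freely, so one class per concern covers all combinations:
query = LocaleDecorator.new(
  SinceLastSyncDecorator.new(ArticleQuery.new, last_sync_at),
  country_id: 1, brand_id: 1
)
changed_articles = query.relation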
Hope this helped!
I am looking for a simple way to get data displayed in an issue as plain text. Basically, I want to be able to type in a lookup ID when creating the issue; once the issue is created, it would call one of our web services to retrieve the data connected with that ID.
This wouldn't be coming from another issue tracker, but rather straight from one of my databases.
What would be the easiest way of accomplishing this? I would like the workflow to be: enter the ID, hit save, and see the data for that ID displayed in the ticket (it doesn't need to be editable, just displayed in the ticket view).
The easiest way is to create a workflow function that is triggered by the Create transition to do the job. There your code can query information from the database and replicate it into the issue's standard and custom JIRA fields.
Then you can prevent editing of the replicated fields by tuning the Edit screen for your issues.
You can also use your function to update the field content from time to time, either at a transition or in a trigger.
Another option is to create read-only custom fields that query each piece of information from the database. That avoids data replication, but it will probably be slow, and it does not apply to default fields.
I have an application that has different data sets depending on which company the user has currently selected (a dropdown box in the sidebar is currently used to set a session variable).
My client has expressed a desire to have the ability to work on multiple different data sets from a single browser simultaneously. Hence, sessions no longer cut it.
Googling seems to imply that sending GET or POST data along with every request is the way to go, which was my first guess. Is there a better/easier/Rails way to achieve this?
You have a few options here, but as you point out, the session system won't work for you since it is global across all instances of the same browser.
The standard approach is to add something to the URL that identifies the context in which to execute. This could be as simple as a prefix like /companyx/users instead of /users where you're fetching the company slug and using that as a scope. Generally you do this by having a controller base class that does this work for you, then inherit from that for all other controllers that will be affected the same way.
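A minimal sketch of that approach, assuming a Company model with a slug column (routes and names are illustrative):

# config/routes.rb
scope ":company_slug" do
  resources :users
end

# Base class that resolves the company from the URL prefix.
class ScopedController < ApplicationController
  before_action :set_current_company

  private

  def set_current_company
    @current_company = Company.find_by!(slug: params[:company_slug])
  end
end

# Controllers that inherit from it are scoped automatically.
class UsersController < ScopedController
  def index
    @users = @current_company.users
  end
end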
Another approach is to move the company identifying component from the URL to the host name. This is common amongst software-as-a-service providers because it makes sharding your application much easier. Instead of myapp.com/companyx/users you'd have companyx.myapp.com/users. This has the advantage of preserving the existing URL structure, and when you have large amounts of data, you can partition your app by customer into different databases without a lot of headache.
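The host-name variant only changes how the company is resolved; under the same assumptions, the base class would look at the subdomain instead:

class ScopedController < ApplicationController
  before_action :set_current_company

  private

  def set_current_company
    # companyx.myapp.com/users -> request.subdomain == "companyx"
    @current_company = Company.find_by!(slug: request.subdomain)
  end
end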
The answer you found, tagging all the URLs with a GET token or a POST field, is not going to work very well. For one, it's messy, and secondly, a site where every link is a POST is very annoying to work with, as it makes navigating with the back button or forcing a reload troublesome. The reason this approach has seen use is that, out of the box, PHP and ASP do not have support for routes, so people have had to make do.
You can create a temporary database table, or use a key-value database, and store all the data you need in it. The unique key can be used as a window id. You then have to add this window id to each link, so that for each browser tab you can retrieve the corresponding data from the database and store it in the session, an object, and so on.
If you have an object, let's say @data, you can store it in the database using Marshal.dump and get it back with Marshal.load.
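A minimal sketch using the Rails cache as the key-value store (the window id handling is an assumption):

# Generate a window id once per tab and append it to every link.
window_id = SecureRandom.hex(8)

# Store the object under the window id.
Rails.cache.write("window-#{window_id}-data", Marshal.dump(@data))

# Later, in another request from the same tab:
@data = Marshal.load(Rails.cache.read("window-#{window_id}-data"))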
I use a home-brewed CMS on my site. Its texts are used by inserting an HTML helper into the view:
<%=Html.CmsEntry("About.Title")%>
The CMS entries are stored in SQL Server. I need a way to scan all the views in my project and check whether all the tokens are already in the database.
Is there a way to do this? I already insert an entry into the DB at runtime when a token is not found, but I need a way to do this without visiting each page. Maybe via reflection?
One way to do this is to create a page (controller action) that scans through the view files looking for "Html.CmsEntry", parses out the entry names, and then queries the database.
If you have access to the database from your dev machine, you could possibly do this in a console app and set it as a build action, so that it runs whenever you compile.
Failing that, you could try relying on a spider (GoogleBot, or otherwise) to hit all your pages, and trigger your existing logging code.
Alternatively, you could store all your page names as constants or enum values. If you used enum values, you could easily spin through them (using Enum.GetValues) and check they're in the database.
All that said, if the pages are stored in your database, can't you do away with all the static pages that call them, and generate everything dynamically from the content already in the database?