How to add a database on a server so it is accessible by all computers on the network - sqlplus

I made a sample database for my students to learn SQL.
I created it, added 30 entries, and saved it.
Now I need the same database on all 100 computers in my lab, but I cannot copy the file to each one by hand.
So tell me how to do this.
I searched the net, but to no avail.
sql> tables
-----------------------------------
dhana
-----------------------------------
task completed in 0.57 seconds
I want to put the same database on 100 computers, but booting each Windows XP machine, copying the file from the network, pasting it, and shutting the computer down again is too tedious and will take a long time.

Hmm.
Refer to Computer Science for Class 11 with Python by Sumita Arora and you may find it.
You are not searching the net properly; questions like this are easy to find on Google.
What is your web browser?

You can use something like MySQL Workbench and access your server remotely, although I've never tried 100 simultaneous connections. Another option is to SSH from the clients into your server and use the database CLI. Of course, I assume you want one database and many clients, not many databases.
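A minimal sketch of the one-server, many-clients setup, assuming a MySQL server (the database name, account, password and subnet below are placeholders, not from the question):
# On the server, create a read-only account for the lab subnet:
mysql -u root -p <<'SQL'
CREATE USER 'student'@'192.168.1.%' IDENTIFIED BY 'classpass';
GRANT SELECT ON sampledb.* TO 'student'@'192.168.1.%';
SQL
# From any lab machine, connect to the single shared copy:
mysql -h 192.168.1.10 -u student -p sampledb
This way there is only one database to maintain, and nothing needs to be copied to the 100 machines at all.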

Related

Neo4J online backup — any way to address the security flaw?

If I make an online backup using the neo4j-admin backup tool remotely, as Neo4j advises, I have to open a public IP and the backup port on my Neo4j application.
However, I don't see neo4j-admin asking for any login credentials, which basically makes it possible for anybody to access the server and copy all the data while the port is open.
There is no setting inside neo4j.conf that would accept backup requests only from a certain address.
So what does this mean? When online backups are done remotely, as advised, the database may be vulnerable to somebody else simply copying all the data.
I didn't find anything in the Neo4j documentation that addresses this flaw (only a warning), and it looks like in the more than 7 years this feature has been available as part of the commercial enterprise version, no solution has been offered.
What do you do to protect the DB, then? At the moment the only solution seems to be not backing it up remotely, but that puts additional load on the server and is not the best solution. Plus, the online backup is not stable when done locally for large DBs. Another solution could be to open the port only via some kind of API to the server, but that may still be exploited if somebody figures out the time frame in which the backup is made.
The documentation states that neo4j-admin must be invoked as the neo4j user. That is the user that owns the Neo4j executables and the databases. So the security is handled by the OS login, and the file permissions should be set to prevent unauthorised access to the Neo4j directories/files, including the neo4j-admin executable.
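A minimal sketch of that file-permission setup on Linux; the paths are assumptions for a package-style install, so adjust them to your layout:
# Make the neo4j OS user the owner of the store and the tooling,
# and strip access for everyone else:
chown -R neo4j:neo4j /var/lib/neo4j /usr/share/neo4j
chmod -R o-rwx /var/lib/neo4j /usr/share/neo4j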

Which is the best way to get real-time data from an Avaya CMS server?

I am sorry if this question is off topic, but I am forced to ask here as there are very limited resources on this to be found on the net.
I am looking to implement a system that gets real-time data from an Avaya CMS server. I did a lot of R&D on JTAPI, but it has some limitations: it does not give all the events and all the data stored in the CMS database. I also tried connecting to the CMS database using Java, but with no success, because that only gives historical data at a delay of 30 minutes.
Is it technically possible to get this using JTAPI, TAPI, or anything else? Or has anyone used a paid tool from Avaya that is affordable and solves this purpose?
I have seen clint but don't intend to use it. Please let me know the options if anyone has done this.
Your CMS may provide a feature known to me as the realtime socket. It is a service that pushes data about skills/splits, VDNs and vectors over a network socket.
It is virtually the same as what you'll find in hsplit and so on, but in real time.
The pushed data can be configured by your CMS admin.
If you are looking for call data, you may take a look at the *call_rec* table in CMS.
You can use clintSVR, which is a high-level tool based on CMS CLINT. Using clintSVR, you can use CGI, OCX and C++ interfaces to get the real-time data from CMS.
As others have said, you can get this from realtime reports. You'll need to scrape them.
RT socket is just a set of wrappers around clint for running reports. It takes the realtime report data and sends it to a socket.
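If you just want to see what the RT socket feed looks like, a quick listener on the receiving host is enough. A minimal sketch, assuming the feed is configured to push to port 9000 (the port is an assumption, and nc flags vary between netcat variants):
# Listen for the pushed report rows and keep a copy on disk:
nc -l 9000 | tee rt_feed.log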
You can roll your own realtime reports with clint and feed the output to whatever needs to ship the data. A sample realtime report can be run from the command line like this:
/cms/toolsbin/clint -u your_user <<EXECUTE_DONE
do menu 0 "cu:rea:Meas"
do "Run"
do "Exit"
EXECUTE_DONE
Here is an example of running a custom report directly:
/cms/toolsbin/clint -u ini <<EXECUTE_DONE
clear
run gem "r_custom/cr_r_3"
do "Run"
do "Exit"
EXECUTE_DONE

Cypher queries work on the local machine but not on the server; they also work in embedded mode but not in REST mode

The Cypher queries that work fine on the local machine (Windows) do not work on the Linux instance. The queries also run great in embedded mode, both on the server and locally, but the same query does not work using the REST mode (0 rows returned). The database sizes on local and server differ hugely, so are there any parameters we need to change to accommodate this difference in DB size?
I get a
com.sun.jersey.api.client.ClientHandlerException:
java.net.SocketTimeoutException: Read timed out
Example queries are simple ones like: match n where n:LABEL_BRANDS return n
The properties in neo4j.properties file are:
neostore.nodestore.db.mapped_memory=25M
neostore.relationshipstore.db.mapped_memory=50M
neostore.propertystore.db.mapped_memory=90M
neostore.propertystore.db.strings.mapped_memory=130M
neostore.propertystore.db.arrays.mapped_memory=130M
The Neo4j version I use is 2.0.0-RC1.
I also get a "Disconnected from Neo4j. Please check if the cord is unplugged." error very frequently when opening the browser interface.
Could there be a mistake in how some properties are set in the config files? Could you identify the mistake here? Thanks.
Upgrade to Neo4j 2.0.
How big is the machine you run your Neo4j server on?
Try to configure a sensible amount of heap (8-16 GB) and the rest of your RAM as memory-mapping according to the store-file sizes on disk.
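A sketch of what that could look like for a 2.0-era install; the numbers are examples only and must be sized to your actual RAM and store files:
# conf/neo4j-wrapper.conf (JVM heap, in MB)
wrapper.java.initmemory=8192
wrapper.java.maxmemory=8192
# conf/neo4j.properties (memory-mapping, sized to the store files on disk)
neostore.nodestore.db.mapped_memory=1G
neostore.relationshipstore.db.mapped_memory=2G
neostore.propertystore.db.mapped_memory=2G
neostore.propertystore.db.strings.mapped_memory=1G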
The query you've shown is a global scan, which will return a lot of data over the wire on a large database. What are the actual graph queries/use cases you want to run?
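For example, anchoring on the label with a property filter and a LIMIT keeps the result set small (the property name and value here are placeholders):
match (n:LABEL_BRANDS) where n.name = 'Acme' return n limit 25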
The error messages from the browser also indicate that either your network setup is flaky or your server has issues. Please upload your messages.log to an accessible place, as Stefan indicated, and add the link to your question.

offline web application design recommendation

I want to know the best architecture to adopt for this case:
I have many shops that connect to a web application developed using Ruby on Rails.
The internet is not reachable all the time.
The solution was to develop an offline system, which requires installing a local copy of the remote database.
All this was already developed.
Now what I want to do:
Always work on the local copy of the database.
Any change to the local database should be synchronized with the remote database.
All the local copies should end up containing the same data.
To solve this problem I thought about using JMS-like software, possibly RabbitMQ.
This consists of pushing every SQL request into a JMS queue; each request would be executed on the remote instance of the application, which would insert into the remote DB and push the INSERT (or other SQL statement) into another queue that is read by all the local instances. This seems complicated and would slow down the application.
Is there a design or recommendation I should apply to solve this kind of problem?
You can do that, but essentially you would be developing your own replication engine. Those things can be tricky to get right (what happens if m1 and m3 are executed on replica r1, but m2 isn't?). I wouldn't want to develop something like that unless you are sure you have the resources to make it work.
I would look into an existing off-the-shelf replication solution. If you are already using a SQL DB, it probably has some support for this. Look here for more details if you are using MySQL.
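For illustration, pointing a MySQL replica at a central master is roughly the following; the hostnames, credentials and binlog coordinates are placeholders (the coordinates come from SHOW MASTER STATUS on the master):
mysql -u root -p <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='central.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
SQL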
Alternatively, if you are willing to explore other backends, I have heard that CouchDB has great support for replication. I have also heard of people using git libraries to do that sort of thing.
Update: After your comment, I realize you already use MySQL replication and are looking for a solution for re-syncing the databases after being offline.
Even in that case, RabbitMQ doesn't help you at all, since it requires a constant connection to work, so you are back to square one. The easiest solution would be to write all the changes (SQL commands) into a text file at the remote location, then, when you get the connection back, copy that file (scp, ftp, email or whatever) to the master server, run all the commands there, and then resync all the replicas.
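A minimal sketch of that flow, with hypothetical paths and hostnames:
# When connectivity returns, ship the queued statements to the master
# and replay them there (assumes credentials in ~/.my.cnf on the master):
scp /var/local/app/pending_changes.sql admin@master.example.com:/tmp/
ssh admin@master.example.com 'mysql appdb < /tmp/pending_changes.sql'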
Depending on your specific project, you may also need to make sure there are no conflicts when running commands from different remote locations, but there is no general technical solution to this. Again, depending on the project, you may want to cancel one of the transactions, notify the users that it happened, and so on.
I would recommend taking a look at CouchDB. It is a non-SQL database that does exactly what you are describing automatically. It is used especially in phone applications that often don't have internet or data connectivity. The idea is that you have a local copy of a CouchDB database and one or more remote CouchDB databases. The CouchDB server then takes care of the replication across the distributed systems, and you always work off your local database. This approach is nice because you don't have to build your own distributed replication engine. For more details, take a look at the 'Distributed Updates and Replication' section of their documentation.
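For reference, kicking off a continuous replication between a local shop database and the central one is a single call to CouchDB's _replicate endpoint (the database names and host are placeholders):
curl -X POST http://localhost:5984/_replicate \
  -H 'Content-Type: application/json' \
  -d '{"source":"shop_db","target":"http://central.example.com:5984/shop_db","continuous":true}'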

A way to synchronize data between an external device and a database?

In the application I am designing, I have to communicate with a device and store a history of data readings in a database. The device is essentially a sensor that spits out numbers via the serial port. The user end of the application is a Ruby on Rails interface that allows the user to view this data and configure the device.
I am wondering what kind of connection between the database and the device you could recommend for this kind of a setup.
Up to this point, I had a custom application running on a host computer (a computer with the device connected directly through a serial port) that would serve as a bridge to a MySQL database. The application would connect directly to the MySQL database and execute queries. It works fairly well, but I am not sure if this is the best solution.
The only other alternative I see is to have an intermediate application that my custom application could connect to, instead of directly going to the database. This could be a part of the main application, or something separate. Would this be a better solution?
Would you recommend another approach?
Thank you,
I have a similar structure, although I fetch my data from a web service. The way I organize it:
Create classes in lib/imports, e.g. DailyDataImport, DailyDataSummarize (you can organize the hierarchy and names as you wish).
Create a rake task under a new namespace, say import, and add it to your cron job at whatever frequency you need (see the crontab sketch below). Take a look at Cron in Ruby; it's helpful.
This allows me to have better control over what goes into my database.
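A crontab sketch for the host computer, with a hypothetical app path and task name:
# Run the import task nightly at 2 a.m.
0 2 * * * cd /srv/myapp && bundle exec rake import:daily_data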
Some questions to consider:
What schedule does the device follow to populate the data?
Do you need the data as-is, or do you want some control over it, or do you need to process it (summarizing, aggregating, etc.)?
MS SQL Server 2008 has great data synchronisation support.
SQL Server 2008 Express is free and can act as a replication subscriber (but not publisher) for clients.
Microsoft Sync Framework
