Why is TeamSQL so slow to execute SQL commands?
I've been using the TablePlus and TeamSQL clients, and TeamSQL is much slower than TablePlus.
If "QuerySense" is on for "All" or you're working on a production database (QuerySense is activated automatically), you can turn off "QuerySense" feature, it checks your queries and warns you if there is any DROP, DELETE, ALTER or similar SQL commands.
Since TeamSQL is based on NodeJS (and so are its drivers), there are known performance issues, but the drivers in TeamSQL 4.0 (still in beta) will be based on JDBC. You can try the beta version here.
You can turn off QuerySense in Preferences.
I've read through the KSQL deployment options here: https://www.confluent.jp/blog/deep-dive-ksql-deployment-options/. It is recommended to use headless KSQL for production deployment.
But I have not found any hints on how I can stop or change queries in production (headless) mode, when KSQL disables interactive access to the server via the REST API/CLI. Does that mean I need to shut down all KSQL servers in order to add or change one query?
You can deploy headless or interactive into production, depending on what meets your needs.
Headless is designed to allow you to run a known set of queries in a locked down fashion. This can be a requirement for production systems with strict SLAs, where you don't want someone connecting and kicking off an expensive query or dropping something that causes SLAs to be broken.
As you correctly identify, the Headless deployment mode doesn't allow you to change the DDL of your cluster through a CLI/API. Instead, it would be more normal to have some kind of automation around updating the SQL file and bouncing the cluster. We are aware there is much room for improvement here.
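For illustration, a minimal sketch of what such a queries file might look like (the stream, table, and topic names here are hypothetical):

    -- queries.sql, read by each KSQL server on startup
    -- (e.g. via the ksql.queries.file setting)
    CREATE STREAM pageviews (viewtime BIGINT, userid VARCHAR, pageid VARCHAR)
      WITH (KAFKA_TOPIC = 'pageviews', VALUE_FORMAT = 'JSON');

    -- A persistent query; in headless mode this can only be changed by
    -- editing this file and bouncing the cluster.
    CREATE TABLE pageviews_per_user AS
      SELECT userid, COUNT(*) AS total
      FROM pageviews
      GROUP BY userid;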
Keep in mind that KSQL does not, at the time of writing, support updating an existing table or stream. However, this is something we're actively working towards. Until that is supported, in general you should only add queries to the file. Any deletions or changes to existing queries would require careful testing, as there are many changes KSQL does not currently support, so always ensure changes are thoroughly tested before any prod deployment. Alternatively, some users spin up new clusters when changes need to be made (hopefully infrequently!). Once the new cluster has caught up, they fail over clients and turn off the old cluster. Again, this is an area in which KSQL will see improvements.
Hope this helps and thanks for using KSQL!
I have an existing system that uses a relational DBMS. I am unable to use a NoSQL database for various internal reasons.
The system is to get some microservices that will be deployed using Kubernetes and Docker, with the intention of doing rolling upgrades to reduce downtime. The back-end data layer will use the existing relational DBMS. The microservices will follow good practice and "own" their data store on the DBMS. The one big issue seems to be how to manage the structure of the database across all this. I have done my research:
https://blog.philipphauer.de/databases-challenge-continuous-delivery/
http://www.grahambrooks.com/continuous%20delivery/continuous%20deployment/zero%20down-time/2013/08/29/zero-down-time-relational-databases.html
http://blog.dixo.net/2015/02/blue-turquoise-green-deployment/
https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database
https://www.rainforestqa.com/blog/2014-06-27-zero-downtime-database-migrations/
All of the discussions seem to stop around the point of adding/removing columns and data migration. There is no discussion of how to manage stored procedures, views, triggers etc.
The application is written in .NET Full and .NET Core with Entity Framework as the ORM.
Has anyone got any insights on how to do continuous delivery using a relational DBMS where it is a full production system? Is it back to the drawing board here, in as much as using a relational DBMS is "too hard" for rolling updates?
PS. Even though this is a continuous delivery problem, I have also tagged it with Kubernetes and Docker, as that will be the underlying tech in use for the orchestration/container side of things.
All of the following under the assumption that I understand correctly what you mean by "rolling updates" and what its consequences are.
It has very little (as in: nothing at all) to do with "relational DBMS". Flat files holding XML would make you face the exact same problem. Your "rolling update" will inevitably cause (hopefully brief) periods of time during which your server-side components (e.g. the db) must interact with "version 0" as well as with "version -1" of (the client-side components of) your system.
Here "compatibility theory" (*) steps in. A "working system" is a system in which the set of offered services is a superset (perhaps a proper superset) of the set of required services. So backward compatibility is guaranteed if "services offered" is never ever reduced and "services required" is never extended. However, the latter is typically what always happens when the current "version 0" is moved to "-1" and a new "current version 0" is added to the mix. So the conclusion is that "rolling updates" are theoretically doable as long as the "services" offered on server side are only ever extended, and always in such a way as to be, and always remain, a superset of the services required on (any version currently in use on) the client side.
"Services" here is to be interpreted as something very abstract. It might refer to a guarantee to the effect that, say, if column X in this row of this table has value Y then I will find another row in that other table using a key computed such-and-so, and that other row might be guaranteed to have column values satisfying this-or-that condition.
If that "guarantee" is introduced as an expectation (i.e. requirement) on (new version of) client side, you must do something on server side to comply. If that "guarantee" is currently offered but a contradicting guarantee is introduced as an expectation on (new version of) client side, then your rolling update scenario has by definition become inachievable.
(*) http://davidbau.com/archives/2003/12/01/theory_of_compatibility_part_1.html
There are also parts 2 and 3.
I work in an environment that achieves continuous delivery. We use MySQL.
We apply schema changes with minimal interruption by using pt-online-schema-change. One could also use gh-ost.
Adding a column can be done at any time if the application code can work with the extra column in place. For example, it's a good rule to avoid implicit columns like SELECT * or INSERT with no columns-list clause. Dropping a column can be done after the app code no longer references that column. Renaming a column is trickier to do without coordinating an app release, and in this case you may have to do two schema changes, one to add the new column and a later one to drop the old column after the app is known not to reference the old column.
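As a sketch (table and column names hypothetical, and in practice each ALTER would be executed through pt-online-schema-change rather than directly), a rename split into two deployable changes might look like:

    -- Change 1: add the new column; the app dual-writes both columns
    -- and existing rows are backfilled.
    ALTER TABLE orders ADD COLUMN customer_id BIGINT;
    UPDATE orders SET customer_id = cust_id WHERE customer_id IS NULL;

    -- Change 2 (a later release, once no app code references cust_id):
    ALTER TABLE orders DROP COLUMN cust_id;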
We do upgrades and maintenance on database servers by using redundancy. Every database master has a replica, and the two instances are configured in master-master (circular) replication. So one is active and the other is passive. Applications are allowed to connect only to the active instance. The passive instance can be restarted, upgraded, etc.
We can switch the active instance in under 1 second by changing an internal CNAME, and updating the read_only option in each MySQL instance.
Database connections are terminated during this switch. Apps are required to detect a dropped connection and reconnect to the CNAME. This way the app is always connected to the active MySQL instance, freeing the passive instance for maintenance.
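The read_only flip itself amounts to a pair of statements, roughly like the following sketch (run by the switchover tooling, not by hand):

    -- On the instance being demoted, after its writes have replicated:
    SET GLOBAL read_only = ON;

    -- On the instance being promoted:
    SET GLOBAL read_only = OFF;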
MySQL replication is asynchronous, so an instance can be brought down and back up, and it can resume replicating changes and generally catches up quickly, as long as its master keeps the binary logs it needs. If the replica is down for longer than the binary log expiration, it loses its place and must be reinitialized from a backup of the active instance.
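A couple of statements relevant to this, as a sketch (the retention value is only an example):

    -- On the replica, check how far behind it is after coming back up:
    SHOW SLAVE STATUS;   -- look at the Seconds_Behind_Master column

    -- On the master, keep binary logs long enough to cover maintenance:
    SET GLOBAL expire_logs_days = 7;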
Re comments:
how is the data access code versioned? ie v1 of app talking to v2 of DB?
That's up to each app developer team. I believe most are doing continual releases, not versions.
How are SP's, UDF's, Triggers etc dealt with?
No app is using any of those.
Stored routines in MySQL are really more of a liability than a feature: no support for packages or libraries of routines, no compiler, no debugger, bad scalability, and the SP language is unfamiliar and poorly documented. I don't recommend using stored routines in MySQL, even though they're common in Oracle/Microsoft database development practices.
Triggers are not allowed in our environment, because pt-online-schema-change needs to create its own triggers.
MySQL UDFs are compiled C/C++ code that has to be installed on the database server as a shared library. I have never heard of any company that used UDFs in production with MySQL. There is too high a risk that a bug in your C code could crash the whole MySQL server process. In our environment, app developers are not allowed access to the database servers for SOX compliance reasons, so they wouldn't be able to install UDFs anyway.
I'm working on an ASP.NET web-based application that I have deployed on a server; it serves responses on port 80 to outside clients.
I want to fix some bugs, so I need to run the application in Debug mode and attach a debugger to the worker process, but this hurts performance and disturbs the QA team.
Can I have two instances of the application, one running in Release mode so that QA activity is not disturbed, while in parallel I debug the other build to fix bugs or do further development?
I face the same problem during development: if multiple developers are working in parallel, only one can debug the application at a time; the others have to wait.
Please suggest how I can get out of this situation.
I have only one server on which I can test this application.
This is too long a discussion to cover fully, but I will try to offer you a few ideas:
1. Each developer should develop on his own machine (sources and database should be local).
2. In order to sync your work you should use:
   a. a source control solution like TFS or SVN (which is free) for your sources;
   b. for database changes, update scripts generated with SQL Schema Compare directly from Visual Studio (you will need SQL Server Data Tools for this), Redgate SQL Compare, or another application that can compare database structures (there are many available online, some of them free); see the sketch after this list.
3. You should have a separate server (DB and app) for testing.
4. You should have a separate server (DB and app) for production.
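For reference, an update script generated by a schema compare tool typically contains plain DDL along these lines (object names hypothetical):

    ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;
    GO
    CREATE INDEX IX_Customer_Email ON dbo.Customer (Email);
    GO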
You say you have one server to test the application. But I suppose each developer has his own computer, right? In this case you need to skip #3 and use the same server for testing and production, but with different databases and applications.
I suggest you check this website for similar answers (see Best practice for test and production environments for example) to find the best solution that applies in your case.
I have an MVC4 application that communicates with my SQL Server database via a WCF layer. The application and WCF layers are co-located on the same server, with the database on a different server.
I am seeing CPU issues on the server that hosts the applications, in particular with my MVC4 application. The server is Windows Server 2008 R2 running IIS 7.5.
I would like to put some performance counters on the server to analyze where the problem causing the high CPU usage may be.
I am new to setting this up and am looking for pointers on which counters to set up, how to analyze them, and how best to gain more knowledge in this area.
Performance counters are generally good for production monitoring. In a dev environment (and I suppose you are at that stage), there are many profiling tools and APIs.
On SQL Server
The best tool is SQL Server Profiler. You can find and diagnose slow-running queries by capturing Transact-SQL statements and/or SQL Server events.
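If you'd rather not run a full trace, a rough alternative sketch is to query the plan-cache DMVs for the top CPU consumers (figures are cumulative since each plan was cached):

    -- Top 10 statements by total CPU time:
    SELECT TOP 10
        qs.total_worker_time / 1000 AS total_cpu_ms,
        qs.execution_count,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END
              - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;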
On ASP.NET MVC
I highly suggest you install a profiler like ASP.NET MiniProfiler or Glimpse. When browsing your website, this will tell you which controller/action/partial/AJAX call is slow, and sometimes why.
Visual Studio also includes a profiler. It lets you measure, evaluate, and target performance-related issues in your code, and it's fully integrated into the IDE. Once you have run a performance session, several reports are available to help visualize and detect performance issues from the data gathered.
If you still can't find the cause, you could run a load test using Visual Studio Web and Load Tests. You will rarely have performance issues with a single user, but with many concurrent users that is generally not the case.
I have some code that uses an Informix 11.5 database that I want to run some tests against.
If the tests fail they will often leave the database in an inconsistent state that needs to be manually resolved before the tests can be run again.
I would like to automate this so that no manual intervention is needed before the tests can be run again.
My current solution is to write some code that does the cleanup, but this means the cleanup code must be maintained whenever new features introduce new potential inconsistent states.
The code runs a lot of stored procedures, which themselves often use transactions. As Informix does not support nested transactions I can't just wrap up all the work in one big transaction.
Is there another way to create a checkpoint which I can restore the database back to?
You could create a virtual machine with an undo disk, and after you run the tests you can close the virtual machine without saving the changes. It's as if you never ran the tests!
If this is a development-only server, how about taking a Level 0 ontape system archive before the test? I think this can be done via the sysadmin functions too (not sure though), so it can be automated. After the tests you just restore the archive.
Changing database state - and resetting it back to a known state - is one of the reasons that the Unit Test community spends time and effort avoiding testing against databases. It is a tough problem.
Informix 11.50 does support savepoints; however, it does not support one BEGIN WORK after another without an intervening COMMIT or ROLLBACK.
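So while you cannot nest BEGIN WORK, a savepoint lets you undo part of an open transaction, along these lines (the procedure name is hypothetical):

    BEGIN WORK;
    SAVEPOINT before_cleanup;                -- mark a point inside the transaction
    -- EXECUTE PROCEDURE risky_cleanup();    -- hypothetical stored procedure
    -- If that step fails, undo only the work done since the savepoint:
    ROLLBACK TO SAVEPOINT before_cleanup;
    COMMIT WORK;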
To the extent possible, have the tests create and load a set of tables with the known data. One way of achieving that is to create a whole new database for the test. However, this is only borderline feasible if you need to test with high volumes of data.
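A minimal sketch of that approach in Informix SQL (names and values hypothetical):

    -- Build a scratch database with known data for each test run:
    CREATE DATABASE testdb WITH LOG;
    CREATE TABLE account (id SERIAL PRIMARY KEY, balance DECIMAL(12,2));
    INSERT INTO account (balance) VALUES (100.00);
    -- ... run the tests against testdb, then throw it away:
    -- DROP DATABASE testdb;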
I don't think this issue is in any way unique to Informix - it is a general problem with testing DBMS operations.