How do I change the timezone of my DB2 service on Bluemix?

The title should say it all. I have a service on Bluemix. From what I can see, there are two regions where I can set up an application: US South or UK. Because the service was set up quite some time ago and has grown substantially, I don't want to move it from US South to UK, nor do I want to change all of my time-related queries. Is there a way I could change the timezone of my DB2 service to match my current timezone / region (Irish Summer Time) without moving the whole application "overseas"?

A DB2 instance (which currently includes both the "SQL Database" and "dashDB" services on Bluemix) takes its timezone setting from the underlying operating system. Unless there is an option to select the timezone when you provision either service (and, as far as I can tell, there isn't), I'm afraid you'll have to move the service physically.

There is no timezone setting for Bluemix dashDB instances, users, tables, or columns.
There is also no timezone information in the web GUI; the only way to see the setting is to query the database.
A major shortcoming, in my opinion.
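
If the server's timezone can't be changed, one practical workaround is to read the server's UTC offset once and do the conversion to Irish time on the client side. Below is a minimal Python sketch of that idea (the connection string is a placeholder; ibm_db is IBM's Python driver for DB2, and CURRENT TIMEZONE is a DB2 special register holding the server's offset from UTC encoded as hhmmss):

import ibm_db  # IBM's DB2 driver
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Placeholder credentials -- substitute your own Bluemix service credentials.
conn = ibm_db.connect(
    "DATABASE=BLUDB;HOSTNAME=example.host;PORT=50000;"
    "PROTOCOL=TCPIP;UID=user;PWD=secret", "", "")

# CURRENT TIMEZONE: the server's offset from UTC as DECIMAL(6,0) in hhmmss
# form, e.g. 0 when the underlying OS runs in UTC.
stmt = ibm_db.exec_immediate(
    conn, "SELECT CURRENT TIMEZONE FROM SYSIBM.SYSDUMMY1")
print("Server UTC offset (hhmmss):", ibm_db.fetch_tuple(stmt)[0])

# Keep timestamps in UTC in the database and convert only for display.
utc_now = datetime.now(timezone.utc)
print("Irish time:", utc_now.astimezone(ZoneInfo("Europe/Dublin")))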

Related

How do I force Interactive Brokers to connect to their US-based servers?

My Interactive Brokers gateway runs in the cloud in the US. I am a European citizen, so IBKR seems to always connect me to their EU servers, even though my trading system runs in the USA and I am trading US equities.
People say that if you use IBKR you should not worry about speed anyway, but adding two extra Atlantic crossings to every API call is just unnecessary.
Write to IB customer support and request that your main data server be changed to New York or Chicago. Warning: some IB support people don't know anything about this.

I can only select a region but no zone for Google Cloud Run (fully managed) services: which zone should I choose for my Google Cloud SQL server?

I have a fully managed Google Cloud Run service running in Frankfurt. I was not able to choose a zone, only a region, so I took "europe-west3". For my Google Cloud SQL server I can and have to choose a zone. I want to select the same data center for both the SQL server and the service to keep the distance short and connections fast, but I don't know which zone to use (a, b, or c). Do you know a way to determine which zone fits best for a fully managed Cloud Run service?
Unfortunately you cannot choose a zone in which to deploy your Cloud Run service; you only get control down to the region. However, this is not something you should be worried about, as this documentation explains:
A zone is a deployment area for Google Cloud resources within a region.
That means that even though the resources might not be in the same cluster or VM, they are still very close geographically and are very likely in the same data center. As the same documentation notes:
Locations within regions (zones) tend to have round-trip network latencies of <1ms on the 95th percentile.
So you are looking at very low latency between your resources anyway, to the point that it might not even be noticeable.
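
If you would rather verify the numbers than trust the documentation, you can measure the round trip from inside the Cloud Run container yourself. A rough Python sketch (the host and port below are hypothetical; point them at your Cloud SQL instance's IP):

import socket
import time

HOST, PORT = "10.0.0.3", 5432  # hypothetical Cloud SQL private IP / port

samples = []
for _ in range(20):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=2):
        pass  # connect and close only; we just want the TCP round trip
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

samples.sort()
# With 20 samples, index 10 approximates the median and index 18 the p95.
print("median %.2f ms, p95 %.2f ms" % (samples[10], samples[18]))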

WUAPI: List updates for different Windows editions

Using the Windows Update Agent API, one can list Windows updates. E.g.:
' Create a Windows Update Agent session and a searcher for it
Set UpdateSession = CreateObject("Microsoft.Update.Session")
Set UpdateSearcher = UpdateSession.CreateUpdateSearcher()

' Search for updates that are either not yet installed or already installed
Set SearchResult = UpdateSearcher.Search("IsInstalled=0 OR IsInstalled=1")

' Print a numbered list of every update found
Set Updates = SearchResult.Updates
For I = 0 To Updates.Count - 1
    Set Update = Updates.Item(I)
    WScript.Echo I + 1 & "> " & Update.Title
Next
Since the query above asks for both installed and non-installed updates, I assume the result lists all available updates for my current Windows edition/build. Is that correct?
My question now is: can I query for a different edition too?
For example, listing Windows Server 2016 updates from a Windows 10 system.
The idea is to make it easy to provision a developer Windows virtual machine from the ISO plus the most recent cumulative update.
To resurrect a dead question!
No, the Windows Update client is entirely predicated on the current machine. I've inspected the web traffic in a bid to reverse-engineer the update server traffic, but I got nowhere. YMMV.
The typical approach would be to have a build server create an image from the base ISO, start it up, apply updates, then shut it back down and create a master image from it. You would do this on a regular basis, so that whenever a new VM is provisioned it is no more than x days behind on updates. E.g. you could do it nightly.
Check out a tool called Packer. (This is not an endorsement, just an example.)
If you go down this road, you also open doors for yourself to do things such as run security scans against the image or install convenience packages. The more you can push into an image, the fewer tasks you have in your daily workload.
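
For illustration, here is a hypothetical nightly job driving such a rebuild with Packer from Python. The template name win2016.pkr.hcl and its image_name variable are assumptions; the template itself would define how the base ISO is booted, updated, and sealed into an image:

import datetime
import subprocess

# Tag each build with the date so stale images are easy to spot and rotate.
image_name = "win2016-dev-" + datetime.date.today().isoformat()

# 'packer build' runs the builders and provisioners defined in the template;
# the (assumed) template boots the base ISO, applies Windows Update, and
# shuts the machine down before the image is captured.
subprocess.run(
    ["packer", "build", "-var", "image_name=" + image_name,
     "win2016.pkr.hcl"],
    check=True)
print("Built image:", image_name)
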
You mentioned that this is for developer machines. Have you considered dev containers in VS Code?

Setting time zone in service fabric cluster

Is it possible to set the timezone of a Service Fabric cluster to something other than UTC (CET, for example)?
I know it is not the best approach, but if it were supported it would save my team a lot of time.

Erlang/Elixir on Docker and Hot Code Swap

One of the features of Erlang (and, by extension, Elixir) is that you can do hot code swapping. However, this seems to be at odds with Docker, where you would need to stop your instances and start new ones from new images holding the new code. This essentially seems to be what everyone does.
That said, I also know that it is possible to use one hidden node to distribute updates to all other nodes over the network. Of course, done just like that it sounds like asking for trouble, but...
My question is the following: has anyone tried, with reasonable success, to set up a Docker-based infrastructure for Erlang/Elixir that allows hot code swapping? If so, what are the do's, don'ts, and caveats?
The story
Imagine a system that handles mobile phone calls or mobile data access (that's what Erlang was created for). There are gateway servers that maintain the user session for the duration of the call or the data access session (I will just call it the session going forward). Those servers keep an in-memory representation of the session for as long as the session is active (the user is connected).
Now there is another system that calculates how much to charge the user for the call or the data transferred (call it PDF, for Policy Decision Function). The two systems are connected in such a way that the gateway server creates a handful of TCP connections to the PDF and drops user sessions if those TCP connections go down. The gateway can handle a few hundred thousand customers at a time. Whenever there is an event the user needs to be charged for (the next data transfer, another minute of the call), the gateway notifies the PDF, and the PDF subtracts a specific amount of money from the user's account. When the account is empty, the PDF tells the gateway to disconnect the call (you've run out of money, you need to top up).
Your question
Finally, let's talk about your question in this context. We want to upgrade a PDF node, and the node is running on Docker. We create a new Docker instance with the new version of the software, but we can't shut down the old version (there are hundreds of thousands of customers in the middle of their calls; we can't disconnect them). We still need to move the customers somehow from the old PDF to the new version, so we tell the gateway node to create any new connections to the updated node instead of the old PDF. Customers can be chatty, and some of them may have long-running data connections (downloading a Windows 10 ISO), so the whole operation takes 2-3 days to complete. That's how long it can take to move from one version of the software to another in the case of a critical bug. And there may be dozens of servers like this one, each handling hundreds of thousands of customers.
But what if we used the Erlang release handler instead? We create a relup file with the new version of the software, test it properly, and deploy it to the PDF nodes. Each node is upgraded in place: the internal state of the application is converted and the node ends up running the new version of the software. Most importantly, the TCP connection to the gateway server is never dropped, so customers happily continue their calls, or keep downloading the latest Windows ISO, while we upgrade the system. All of this is done in 10 seconds rather than 2-3 days.
The answer
This is an example of a specific system with specific requirements. Docker and Erlang's release handling are orthogonal technologies. You can use either or both; it all boils down to the following:
Requirements
Cost
Will you have enough resources to test both approaches predictably, and enough patience to teach your Ops team to deploy the system using either method? What if the testing facility costs millions of pounds (because of the required hardware) and can exercise only one of those two methods at a time (because a test cycle takes days)?
The pragmatic approach might be to deploy the nodes initially with Docker and then upgrade them with the Erlang release handler (if you need to use Docker in the first place). Or, if your system doesn't need to stay available during an upgrade (unlike the example PDF system), you might just always deploy new versions with Docker and forget about release handling. Or you may as well stick with the release handler and forget about Docker if you need quick, reliable on-the-fly updates, with Docker used only for the initial deployment. I hope that helps.