I have 5000+ airport codes (with latitude and longitude) in a table, and a master table holds the timezones from this list: http://support.microsoft.com/kb/973627.
How can I map a timezone to each airport code? The same offset can cover multiple time zones: GMT-06:00, for example, has several timezone names.
The mapping lists available on websites do not use the same timezone names as those in the link.
This isn't something that SQL Server can do natively; you will have to map lat/lon to timezone yourself. See http://timezonedb.com/ for an open-source dataset that might help you.
NOTE: The list of Windows timezones for Windows XP is invalid. It has changed several times in the past, most recently only a few months ago. In any case, Windows timezone names aren't used in travel, or anywhere else for that matter.
GMT-6 isn't a timezone, it's an offset. Timezones are those found in the IANA database, e.g. Europe/Moscow. Depending on the time of year, the offset changes. In fact, Russian time zones have changed many times over the last decade, so you need to know the full date to calculate the offset.
The IANA database contains all timezones and the rules to calculate their offsets back to the 19th century.
Your airport reference table should contain the airport code and the timezone, resolving the offsets in client code. Storing the offsets isn't practical. Not only do they change, but there are no fixed dates to mark the change from winter to summer time.
I understand this is a bit of a pain when you want to calculate travel durations. If the number of calculations isn't very large, you can do everything client-side. The IANA database isn't large and is often embedded in libraries, so each calculation costs only a few CPU cycles. Another option is to create a SQLCLR assembly to do the translation on the server. Even big travel agencies (as in top 10 in Europe) don't need that, though.
On Linux systems the IANA timezone database is part of the OS, and you can resolve offsets with system calls. On Windows, you can use a library like NodaTime to resolve timezones to offsets.
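To illustrate why the full date matters when resolving a timezone to an offset, here is a minimal sketch using Python's standard zoneinfo module (NodaTime plays the same role in .NET). The same IANA zone name resolves to different offsets depending on the date:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# One IANA zone, two different offsets: DST (and historical rule
# changes) mean the offset is a function of the date, not the zone.
ny = ZoneInfo("America/New_York")

winter = datetime(2023, 1, 15, 12, 0, tzinfo=ny)
summer = datetime(2023, 7, 15, 12, 0, tzinfo=ny)

print(winter.utcoffset())  # -1 day, 19:00:00  (i.e. UTC-05:00, EST)
print(summer.utcoffset())  # -1 day, 20:00:00  (i.e. UTC-04:00, EDT)
```

This is exactly why storing a fixed offset per airport is not practical: the stored value would be wrong for half the year.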
The IANA database is updated regularly. Libraries that embed it have to be recompiled with the new data, which means you have to update whichever you use as well.
All airport reference data, e.g. from Flightstats or other providers, always contains the timezone.
If you care about airport timezones, you should avoid using datetime in SQL Server and use datetimeoffset wherever possible. Rather than assume that all travel dates are local dates (thus requiring airport lookups and conversions), be explicit. All airlines post their schedules with valid offsets anyway. This can make duration calculations a lot easier.
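As a sketch of why explicit offsets simplify duration maths (shown here with Python's offset-aware datetimes rather than T-SQL's datetimeoffset, and with made-up schedule times): once both endpoints carry their offset, a simple subtraction gives the true duration with no airport or timezone lookup at all.

```python
from datetime import datetime, timezone, timedelta

# Hypothetical schedule times, published as local time + valid offset,
# the moral equivalent of SQL Server's datetimeoffset.
depart = datetime(2023, 6, 1, 8, 30, tzinfo=timezone(timedelta(hours=-5)))  # UTC-05:00
arrive = datetime(2023, 6, 1, 17, 45, tzinfo=timezone(timedelta(hours=1)))  # UTC+01:00

# Both values carry their offset, so subtraction is unambiguous:
# the naive local difference would be 9h15m, but the real flight time is
# (17:45 - 01:00) - (08:30 + 05:00) in UTC terms.
duration = arrive - depart
print(duration)  # 3:15:00
```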
As for matching airports to timezones:
This process can't be fully automated, because all datasets (even commercial offerings) have omissions. Even if you buy a commercial service, there will be cases where the airport is missing and you have to google for it by city name etc. This can happen for example if an airport hasn't opened yet but airlines have started selling flights from/to it.
Matching coordinates to timezones isn't very helpful. It's easier to match the airport codes themselves. There are several services that offer web service or REST endpoints which you can call to request airport information. Understandably, the free/open source services are less reliable than the commercial offerings.
As a starting point, you should find some services, query them for the airports you already have and keep the information, updating it whenever a new airport comes up. Expect to spend some time cleaning up the data though.
Consider subscribing to a commercial service - they have adopted a per-use model that costs very little if you only have 5000 calls to make. They also provide a lot of information like coordinates, locations, performance informations etc.
Using Delphi XE2: I have used AbsoluteDB for years with good success for smallish needs, but it does not scale well to large datasets. I need recommendations for a DB engine for the following conditions:
Large (multi-gigabyte) dataset, few tables with lots of small records. This is industrial-equipment historical data; most tables have new records written once a minute with a device ID, date, time and status; a couple tables have these records w/ a single data point per record, three others have 10 to 28 data points per record depending on the device type. One of the single-data-point tables adds events asynchronously and might have a dozen or more entries per minute. All of this has to be kept available for up to a year. Data is usually accessed by device ID(s) and date window.
Multi-user. A system service retrieves the data from the devices, but the trending display is a separate application and may be on a separate computer.
Fast. Able to pull any 48-hour cluster of data in at most a half-dozen seconds.
Not embedded.
Single-file if possible. It is critical that backups and restores can be done programmatically. Replication support would be nice but not required.
Can be integrated into our existing InstallAware packages, without user intervention in the install process.
Critical: no per-install licenses. I'm fine with buying per-seat developer licenses, but we're an industrial-equipment company, not a software company - we're not set up for keeping track of that sort of thing.
Thanks in advance!
I would use
either PostgreSQL (more proven than MySQL for such huge data)
or MongoDB
The main criterion is what you will do with the data, and you did not say much about that. Will you run individual queries by data point? Will you need aggregates (sum/average...) over data points per type? If "Data is usually accessed by device ID(s) and date window", then I would perhaps not store the data in individual rows, one row per data point, but gather data within "aggregates", i.e. objects or arrays stored in a "document" column.
You may store those aggregates as BLOBs, but that may not be efficient. Both PostgreSQL and MongoDB have powerful object and array functions, including indexes within the documents.
Don't start from the DB, but start from your logic: which data are you gathering? how is it acquired? how is it later on accessed? Then design high level objects, and let your DB store your objects in an efficient way.
Also consider the CQRS pattern: it is a good idea to store your data in several places, in several layouts, and make a clear distinction between writes (Commands) and reads (Queries). For instance, you may send all individual data points in a database, but gather the information, in a ready-to-use form, in other databases. Don't hesitate to duplicate the data! Don't rely on a single-database-centric approach! This is IMHO the key for fast queries - and what all BigData companies do.
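As a minimal sketch of the "gather into aggregates" idea, in Python (the device names, field names, and hourly grouping are all invented for illustration; in practice this logic would live in your Delphi code writing to a document column): incoming per-minute readings are regrouped into one document per device and hour, matching the way the data is queried (device ID + date window).

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw readings: one row per device per minute.
readings = [
    {"device": "pump-1", "ts": datetime(2023, 5, 1, 10, 0), "value": 4.2},
    {"device": "pump-1", "ts": datetime(2023, 5, 1, 10, 1), "value": 4.3},
    {"device": "pump-2", "ts": datetime(2023, 5, 1, 10, 0), "value": 7.9},
]

# Write side: gather points into one "document" per (device, hour).
# The read side then fetches a handful of documents per query instead
# of thousands of individual rows.
docs = defaultdict(list)
for r in readings:
    hour = r["ts"].replace(minute=0, second=0, microsecond=0)
    docs[(r["device"], hour)].append({"minute": r["ts"].minute, "value": r["value"]})

for key, points in docs.items():
    print(key, points)
```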
Our Open Source mORMot framework is ideal for such a process. I'm currently working on a project gathering information in real time from thousands of remote devices connected via the Internet (alarm panels, in fact), then consolidating this data in a farm of servers. We use SQLite3 for local storage on each node (up to some GB) and consolidate the data in MongoDB servers. All the logic is written in high-level Delphi objects, and the framework does all the needed plumbing (including real-time replication and callbacks).
I have been searching the web for a simple world clock program that will run in the background without the use of an internet connection. Instead, I found this site via a Google search: http://wwp.greenwichmeantime.com/info/global.htm
Now I would like some opinions on which language would best fit the program I have in mind. Basically, it will look like a simple 2-column table: column 1 will be the time zone, and column 2 the current time according to its respective zone. It will just run continuously in the background without consuming too much memory. The local time of the computer will be used as the base.
Again this will be an OFFLINE world time zone, so a desktop-application.
Features may include these as well but not necessary:
The time will be at 24 hour format to make it easier to read (well I find it easier and might be necessary).
Transparency setting.
Always on top.
I have learned some basic programming languages but don't know how to make this project in actuality, since I haven't touched the "visual" parts of the languages. I've seen some applications built in Java, but I haven't learnt it yet. I know basic Python and basic C++.
Here's what I think it will look like: http://imgur.com/8ZdisWX
Just make it longer vertically so every time zone will fit.
First, some notes. The page you link to shows 24 timezones. This is, for all practical purposes, a lie. There aren't 24 time zones; there are hundreds. 408, to be specific, of which several have more than one name, so there are 548 different names: http://en.wikipedia.org/wiki/List_of_tz_database_time_zones If you are on Windows, you might want to use Windows' internal registry of time zones. It has fewer, but still hundreds of timezones.
Listing all of those is not practical. You will need a way for the user to select which time zones to display. Many operating systems' time/date widgets allow this already.
If you still want to do this application, then you can do this in any language whatsoever, more or less. Since you know basic Python I'd recommend you do it in Python, using pytz as the time zone library. As for GUI support, Kivy looks cool, and is very portable, even supporting iOS and Android.
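The core of the clock fits in a few lines. Here is a minimal sketch using the standard-library zoneinfo module (a modern stand-in for pytz) with a hand-picked, hypothetical zone list; a GUI layer like Kivy would just render these rows into the two-column table:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo, available_timezones

# Far more than 24 zones exist in the tz database.
print(len(available_timezones()))  # several hundred names

# One UTC "now", rendered per user-selected zone in 24-hour format.
zones = ["Europe/London", "America/New_York", "Asia/Manila"]
now = datetime.now(timezone.utc)
for name in zones:
    local = now.astimezone(ZoneInfo(name))
    print(f"{name:20} {local:%H:%M}")
```

Refreshing the display is then just a timer that recomputes `now` once a minute; everything is resolved offline from the local tz database.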
Also note that time zones change often, so you will probably need to update your application a couple of times per year just to pick up the latest time zone definitions. You can get around that in various more or less complicated ways, but I'd leave that for later.
Normally, Rails stores all times in the database in UTC time. If you set your time zone to be something else, it converts automatically between that zone and UTC when you save to the database or retrieve from it.
What are some of the advantages of this approach? Are there any disadvantages? Is there any way to have Rails use a different time zone?
I think some of the advantages may be:
UTC removes the ambiguities of seasonal time changes
You can present different time zones to different users while keeping things consistent in the database
The only disadvantage I can think of is that, for an internal app where all users are actually in the same time zone, this difference makes it harder to run raw SQL queries based on local time.
This question has a little bit of a religious feel to it, but I'm going to answer it based on my personal experience.
Always store dates in an unambiguous form. Storing the date in UTC is pretty much the standard in that regard.
Advantages:
You can do simple math on date-times in the database without needing to convert them.
You can adjust the display of the dates at the presentation layer
Web applications can use a little bit of javascript to display local time
Disadvantages:
You need to convert all the times into some 'local' time on display
Localtime <-> UTC conversions incur a small processing penalty
Can you get Rails to do something different? Possibly, but I've never tried, as it was just too much work to fight what was, IMHO, a sensible design.
Does it make sense to use UTC rather than "just use my timezone"? Yes. Your server could be in California and your users in New York; who decides what local time is in that case? The server? The users? Mike, who just happens to be in London for the week on a business trip? Which timezone do you use then?
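The store-UTC/display-local pattern described above can be sketched in a few lines (shown in Python rather than Rails, with an arbitrary example timestamp): one unambiguous instant in the database, rendered differently for each user at the presentation layer.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Stored in the database: unambiguous UTC.
stored = datetime(2023, 3, 10, 18, 30, tzinfo=timezone.utc)

# Displayed per user: the same instant, each in the user's own zone.
for user_zone in ("America/Los_Angeles", "America/New_York", "Europe/London"):
    local = stored.astimezone(ZoneInfo(user_zone))
    print(user_zone, local.strftime("%Y-%m-%d %H:%M"))
```

Mike in London, the server in California, and the user in New York all see different wall-clock times, but comparisons and date math against the stored value stay trivially correct.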
I'm finding it difficult to find any discussion on best practices for dealing with multiple currencies. Can anyone provide some insight or links to help?
I understand there are a number of ways to do this: either transactionally, where you store the value entered as is, or functionally, where you convert to a base rate. In both cases you need to store the exchange rate covering the transaction's time, for each currency it may need to be converted to in the future.
I like the flexibility of the transactional approach, which allows old exchange rate info to be entered at a later date, but probably has more overhead (as you have to store more exchange rate data) than the functional approach.
Performance & Scalability are major factors. We have (all .net) a win & web client, a reports suite and a set of web services that provide functionality to a database back-end. I can cache the exchange rate information somewhere (e.g. on client) if required.
EDIT: I would really like links to some documents, or answers that include 'gotchas' from previous experience.
I couldn't find any definitive discussion, so I'm posting my findings; I hope they help someone.
The currency table should include the culture code to make use of any Globalisation Classes.
Transactional Method
Store values in the currency local to the customer, and store the multiple conversion rates for the transaction currency that applied when the transaction occurred.
Requires multiple exchange rates for each currency
Site Settings table would store the input currency
Input & Output of values at client level would have no overhead as it can be assumed the value is in the correct currency
To apply exchange rates, you would need to know the currency of the entered values (which may differ for cross-client reports), then multiply by the associated entity exchange rate that was valid during the transaction's time period.
Functional Method
Store in one base currency, hold conversion rates for this currency that apply over time
Consideration needs to be given to which point between the front end and the database is the best place to convert values
Input performance is marginally affected as a conversion to the base currency would need to take place. Exchange rate could be cached on the client (note each entity may use a different exchange rate)
This required one set of exchange rates (from base to all other required currencies)
To apply exchange rates, every transaction would need to be converted between the base and required currencies
Composite
At the point of transaction, store both the transactional value and the functional value; that way no exchange rate information needs to be stored. (This would not be a suitable solution, as it effectively restricts you to two currencies for any given value.)
Comparison
Realistically, you have to choose between function and transactional methods. Both have their advantages & disadvantages.
The functional method does not need to store a local currency per transaction, requires converting current DB values to the base currency, only needs one set of exchange rates, and is slightly harder to implement and maintain, though it requires less storage.
The transactional method is much more flexible, though it does require more exchange rate information to be held, and each transaction needs to be associated with an input currency (though this can be applied to a group of customers rather than each transaction). It would generally not affect code already in production, as local currencies would still be used at the local level, making this solution easy to implement and maintain. Obviously, though, any reports or values needing conversion to a different currency would be affected.
In both cases, each transaction needs exchange rates for the time of the transaction for each currency it will be converted to. The functional method needs these at the point of transaction, whereas the transactional method allows more flexibility, as past exchange rate data can be entered at any time (allowing any currency to be used); i.e. you lose the ability to use other exchange rates with the functional method.
Conclusion
A transactional method of currency management would provide a flexible approach, avoiding any negative impact on client performance and requiring zero client code modification. A negative performance impact would likely occur in reports, which will all need rework if different currencies are required.
Each client site will need to store a currency reference that states what their input currency is.
It should be possible to get away with storing exchange rates at a high level (e.g. for a group of customer sites); this will minimise the amount of data stored. Problems may occur if exchange rate information is required at a lower level.
There is no single answer, because it very much depends on the way a business handles the transactions in those currencies. Some companies use fairly sophisticated ways to manage foreign currencies. I suggest you read up on multi-currency accounting.
The main thing to do is to capture the data in the unit, value & date in which the business transaction is done without any conversion, or you risk losing something in translation.
For display & reporting, convert on demand, using either the original exchange rate, or any other exchange rate depending on the intent of the user.
Store & compute with values as the 'Decimal' (in C#) type - don't use float/double or you leave yourself vulnerable to rounding errors.
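The float-versus-decimal point is easy to demonstrate. Here is a quick sketch in Python's decimal module (the counterpart of C#'s Decimal), with made-up amounts and rates:

```python
from decimal import Decimal, ROUND_HALF_UP

# Binary floats cannot represent most decimal amounts exactly:
print(0.1 + 0.2)                          # 0.30000000000000004
print(Decimal("0.10") + Decimal("0.20"))  # 0.30

# Hypothetical conversion: 19.99 EUR at a rate of 1.0834 USD/EUR,
# rounded explicitly to the cent rather than left to float luck.
amount = Decimal("19.99")
rate = Decimal("1.0834")
usd = (amount * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(usd)  # 21.66
```

Note that the rounding mode is chosen explicitly; accounting rules often dictate which one to use, so never rely on a language's default.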
For instance, the way I did a multi currency app in a previous life was:
Every day, the exchange rates for the day would be set and this got stored in a database and cached for conversion in the application.
All transactions would be captured as value + currency + date (ie. no conversion)
Displaying the transaction in a users' currency was done on the fly. Make it clear this is not the transaction currency, but a display currency. This is similar to a credit card statement when you've gone on holiday. It shows the foreign transaction amount and then how much it ends up costing you in your native currency.
Our company deals with multi-currency accounting and budgeting. The solution we implemented is quite straightforward and includes the following:
one currency table, with a few fields including the number of decimals to be considered for the currency (yes, some currencies have to be managed with 3 decimals...) and an exchange rate value, which has no meaning other than being a 'proposed/default exchange rate' when evaluating 'non-executed' or 'pending' financial transactions (see below)
In this currency table, one of the records has an exchange rate of 1. This is the main/pivot currency in our system
All financial transactions, or all operations with a financial dimension (what we call commitments in our language), are either sorted as 'pending' or 'executed':
Pending transactions are for example invoices that are expected to be received for a certain amount at a certain date. In our budget follow-up system, these amounts are always reevaluated according to the 'proposed/default exchange rate' available in the currency table.
Executed transactions are always saved with the execution date, amount, currency AND exchange rate, which has to be confirmed/typed in when entering the execution data.
(I'm assuming you already know that you definitely shouldn't store currency data as float and why)
In my opinion, working with a single base currency might be easier; however, you should save the original amount, original currency, conversion rate, and base currency amount - otherwise your Accounting dept. might eat you alive, as they're likely to keep different currencies sort of separately.
Since exchange rates fluctuate, one approach is as you mentioned - store an "entered as is" amount that is not converted but display a companion field which is display only and shows the converted amount. In order to do the conversion, a table of exchange rates and their applicable date ranges would be required. If the size of this is small, caching on the client is an option. Otherwise, a remote call would be required in order to perform the conversion.
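A minimal sketch of such a date-ranged rate table and lookup, in Python (the currencies, date ranges, and rates are invented for illustration; in a real system this table would live in the database or a client-side cache):

```python
from datetime import date
from decimal import Decimal

# Hypothetical rate table: (currency, valid_from, valid_to, rate_to_base)
rates = [
    ("EUR", date(2023, 1, 1), date(2023, 1, 31), Decimal("1.07")),
    ("EUR", date(2023, 2, 1), date(2023, 2, 28), Decimal("1.09")),
]

def rate_for(currency, on):
    """Return the base-currency rate applicable on the given date."""
    for cur, start, end, rate in rates:
        if cur == currency and start <= on <= end:
            return rate
    raise LookupError(f"no {currency} rate covering {on}")

print(rate_for("EUR", date(2023, 1, 15)))  # 1.07
print(rate_for("EUR", date(2023, 2, 10)))  # 1.09
```

The display-only companion field is then just `amount * rate_for(currency, txn_date)`, computed at render time so the stored value stays "as entered".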
I am sure there are many developers out there who have teams spread across different time zones. What are some of the challenges people face, and what's the best way to tackle them?
I currently work in a different time zone than the rest of my team. The biggest challenge is early in the morning and late in the afternoon when some of us haven't started work for the day or some have already left for the day.
It's just part of the work effort, and we all respect each other's valuable time. If something is critical (and that's a relative term), then we just call the team member or page/text the whole team. If that happens, we all respond as needed. No big deal. Because of the respect factor, we know to use this only when necessary.
During the normal working day we just use the standard stuff like email, phone, and IM.
In all honesty, any company that has split development of a project across time zones is out of touch with the realities of engineering. The MBAs, in an attempt to save themselves a buck or two, foist upon their engineers the unenviable position of altering the schedules of their lives - leading to high stress, longer work hours, lower morale, and higher turnover. Quality suffers, ship dates suffer, feature lists suffer. The only increase you'll see in your project is the bug count.
You can have engineering projects split up like this if you don't have any need for low-latency communication between project units. In other words - if they are working on almost entirely independent segments of the system.
We have a team that is distributed or has to work across 3 or 4 different timezones. In the course of this we have faced several challenges primarily relating to communication.
Meetings are tough to arrange at a convenient time for all team members to attend, so there can sometimes be a need to have subsets of the team meeting, or to forego the team meeting approach for an individual update approach where one primary team member is responsible for a particular overseas team.
Another issue is work handover and communication. For example, we have a resource in India, and if they hit a problem that causes them to stop work, it can take 2 or 3 days out of their schedule if we don't respond quickly enough, all due to the time difference. Therefore, it is imperative that we not only schedule diverse work to fill these delays but also respond to their queries in a timely manner. We often assign testing tasks to resources in this particular timezone, as that is often an asynchronous, open-ended task.
In addition, you need to have a good change management system and code repository. The more asynchronous you can make the communication channels, the better, and this goes for information exchange as well (such as source and issue tracking).
There is no reason why you can't make distributed teams work, especially in the current age where we can work from almost anywhere as long as we have a link to the Internet. However, it's important to know where your bottlenecks are in a project and to ensure that work is distributed accordingly.
If you don't have a great process tuned specifically for this kind of scenario, then different time zones are going to kill your project deadline. At least one side should be very flexible about adjusting their time for meetings, but of course that will eventually create frustration among the team members.
Check out this SO thread, which talks about outsourcing and its practical issues; I think you will pick up some points from there too: https://stackoverflow.com/questions/111948/outsourcing
Or Outsourcing Tag - https://stackoverflow.com/questions/tagged/outsourcing
We have these kinds of issues with support - we are using 3 commercial SDKs and the support teams are in distant time zones (8-10 hours difference). Moreover, not all work days overlap.
This fact had a big impact on my reverse engineering abilities :)
Plan, plan and plan some more. A few other things to note:
1) Be aware of local holidays if the team is in other locales; different countries may have different holidays. For example, some Orthodox churches celebrate certain Christian holidays two weeks later than most other Christians.
2) Plan on meetings at a specific time that may be outside the normal work hours. This was particularly true when there was a 13 hour time difference between the rest of the team and myself.
3) Be aware of "core hours" across the time change. For example, if I'm on Pacific time and update something in New Jersey on Eastern time at 5 pm Pacific, that is 8 pm Eastern, and there may be nobody there to notice the change or test it before the following morning. That may mean some support person gets up at 4:30 am Pacific because people in the East have started showing up for work and asking, "Huh? Why isn't this working like it did yesterday?"
There are also other obvious things to be aware of, such as whether various alphabets are involved (e.g. Latin, Cyrillic, Arabic), since this may affect how a computer interprets some entered text.