What is a checksum and where is it used?

I have a project on the topic of checksums, and I would like to know what to focus on and how to make my presentation more attractive than my colleagues'.
What is a checksum?
Where is it used, and how can I explain it in a few words?

A checksum is a small value computed from a piece of stored or transmitted digital data; recomputing it later and comparing against the stored value lets you detect errors in the data.
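For example, here is a minimal sketch of the ones'-complement checksum used in the IPv4 header (Python, purely for illustration):

def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words, as used in the IPv4 header."""
    if len(data) % 2:              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # big-endian 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

payload = b"stored or transmitted data"
stored = internet_checksum(payload)

# Later, recompute and compare to detect corruption:
assert internet_checksum(payload) == stored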
You can refer to the links below for more detail:
1. http://www.online-tech-tips.com/cool-websites/what-is-checksum/
2. http://www.thegeekstuff.com/2012/05/ip-header-checksum
Hope it helps.

Related

Is INR to USD conversion transformation or translation in WebSphere Message Broker?

I'm currently using IBM Integration Bus v9.0 and I have a question:
Is INR to USD conversion transformation or translation? Please give me the answer.
Thanks in advance.
That's not really a technical question. It sounds as if there is something behind the question; maybe a contractual dispute? Please define your terms (what do you mean by translation?) and explain why you need to know.
It's translation; keep this in mind:
Perform transformation on a message to make its structure comply with the receiving system’s requirements.
Perform data translation on a message so that its data is represented according to the receiving system’s conventions.
Simple translation might be required if the two systems use different values to represent the same information for a given field.
Complex translation might involve augmenting or replacing groups of fields with a completely different structure and encoding.
Money has a fixed value whether it's in INR or USD. There's no structure to it, just a value with a unit, and the unit is interpreted differently by different systems, so it's translation.
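As a hypothetical illustration (the field names and the exchange rate below are made up, not from Message Broker), translating a value without touching the message structure might look like:

# Hypothetical illustration: same message structure, only the value
# and its unit are translated to the receiving system's conventions.
INR_PER_USD = 83.0  # assumed example rate, not a real quote

def translate_currency(message: dict) -> dict:
    """Translate an amount from INR to USD; the structure is unchanged."""
    if message["currency"] == "INR":
        message = {
            **message,
            "currency": "USD",
            "amount": round(message["amount"] / INR_PER_USD, 2),
        }
    return message

print(translate_currency({"amount": 8300.0, "currency": "INR"}))
# {'amount': 100.0, 'currency': 'USD'}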

Maestro Credit Card: Pulling information from MSR dump (Any language)

We have a system that allows you to scan your credit card on an MSR, and from the dump I pull the needed fields such as name/cc/exp. Recently we had to add globalized credit cards to this. For almost all of the cards provided, I was still able to pull the information, since they all seemed to follow a standard. One exception, however, was a Maestro card. The format is completely different, and since I neither have one to verify the actual number on the card against the dumped data, nor have access to any other dumps, it's very hard for me to figure out the correct format of these. I also did some Google searching, with little luck on extracting data from an MSR dump.
Unlike almost all other cards, track one does not start with "%B" and track two does not start with ";". Both tracks do appear to end with "?" (based on analyzing the whole dump, not by track). Track 3 does appear to be empty, which is normal.
The whole dump seems to lack any name data and is basically in the format of:
###=###?
###=###=###==#=###?
Note that apart from the single #, the runs of three #s above represent variable-length fields.
Again I only had access to one single dump, which for obvious reason I cannot post here.
If anyone has some example code in any language, or can link me to some help, I'd really appreciate it.
Thanks in advance,
Anthony
Is it possible that the card you are testing is faulty, or simply a non-standard card that is generally not supported? Try to check track data from other Maestro cards before assuming your system is at fault.
I say this because ISO 7813, the governing standard for transaction cards, is pretty clear that track 2 data begins with the start sentinel ";" and that all valid bank cards have a format code "B" following the start sentinel "%" in track 1.
Check the standard carefully and make sure your system is parsing correctly:
http://www.gae.ucm.es/~padilla/extrawork/tracks.html
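For what it's worth, here's a rough sketch of parsing standard ISO 7813 tracks using the sentinels described above (Python; the field layout is simplified, and the sample card number is a well-known test PAN):

import re

# Simplified ISO 7813 patterns: track 1 is "%B<PAN>^<NAME>^<YYMM>...?",
# track 2 is ";<PAN>=<YYMM>...?". Real cards may append discretionary data.
TRACK1 = re.compile(r"%B(?P<pan>\d{1,19})\^(?P<name>[^^]{2,26})\^(?P<exp>\d{4})")
TRACK2 = re.compile(r";(?P<pan>\d{1,19})=(?P<exp>\d{4})")

def parse_tracks(dump: str) -> dict:
    """Pull PAN/name/expiry from whichever standard track is present."""
    m = TRACK1.search(dump) or TRACK2.search(dump)
    if not m:
        raise ValueError("no ISO 7813 track found - non-standard dump?")
    return m.groupdict()

print(parse_tracks("%B4111111111111111^DOE/JOHN^2512101...?"))
# {'pan': '4111111111111111', 'name': 'DOE/JOHN', 'exp': '2512'}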

Profanity checking for promotional codes

I have a slightly unusual profanity-related question.
Now we're used to dealing with profanity-filtering of user-generated content — any method is imperfect, but products like CleanSpeak and WebPurify do a good-enough job.
The problem we have at the moment, though, is that we've been building an engine to run promotional-code–based competitions that will be used internationally. We could do with checking that none of these codes is profane in Latin American Spanish or Malay (at least in the first instance), to make sure we don't send out a code that's equivalent to FUCK23 or PEN15 or something.
We've tried Googling around and asking people we know, but we can't find an easy way of getting hold of an es-419 or an ms profanity list to filter the codes against. As there are literally millions of codes per locale, we'd rather do an offline check than hit an API for each code (which would be expensive both in terms of bandwidth and usage fees).
I know this is a bit of a long shot, but does anyone know of a good source for profanity lists in different languages?
#disclaim: We know that no profanity filtering is perfect, that it's essentially futile with user-generated content and we have read SO #273516: How do you implement a good profanity filter? — that's not what we're asking.
Building or finding lists in other languages is extremely time-consuming and difficult (trust me, we've built many of them at Inversoft). You might be better off tweaking the code generators instead (from what I could tell, your codes are generated programmatically rather than by humans).
The best way to tweak a generator is to ensure that the codes can't easily form words based on the general use of consonants and vowels in most European languages. Things get a bit dicey in Polish and others, but it usually works.
Generally, most codes that start with a vowel are followed by another vowel or a non-joining consonant (like 'q' without a 'u'). If the code starts with a consonant then the next character is the same consonant or one that has a low probability of being used. For example, if you start with 's' then adding 'g' is a good choice.
You could also use Wiktionary or other similar sources (like Linux dictionary files) to build a statistical approach to this. By extracting the probability of characters appearing next to each other, you should be able to generate codes that are very unlikely to form words in any language.
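A rough sketch of that statistical idea (Python; /usr/share/dict/words is just a placeholder for whatever per-language wordlists you collect):

import re
from collections import Counter
from itertools import pairwise  # Python 3.10+

def bigram_counts(words):
    """Count adjacent-letter pairs across a dictionary wordlist."""
    counts = Counter()
    for word in words:
        counts.update(pairwise(word.lower()))
    return counts

def looks_wordlike(code, counts, threshold=5):
    """Flag a code whose letter runs are made entirely of common pairs."""
    for run in re.findall(r"[a-z]{3,}", code.lower()):
        if all(counts[pair] >= threshold for pair in pairwise(run)):
            return True
    return False

# Placeholder wordlist; in practice you'd run one list per target language.
with open("/usr/share/dict/words") as f:
    counts = bigram_counts(line.strip() for line in f)

for code in ["XQZKT7", "SHITE9"]:
    print(code, "reject" if looks_wordlike(code, counts) else "ok")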
However, if I misread your question and you aren't generating the codes programmatically, you can ignore my response completely. :)
I have had the same thoughts while trying to generate 6-character codes for a project I am doing.
I decided to reduce the likelihood of obviously profane codes, so I removed the vowels I found in as many "bad" words as I could think of from my initial base-36 generation code. That left me with something more like a base-28 system that did not include a, e, i, o, u, 1, or 0; the one and zero were removed to reduce confusion with I, L, and O in some fonts.
So far I have not seen a "profane" code generated, and base 28 with six characters still gives a few hundred million unique combinations.
I cannot vouch for other languages, and hadn't even considered them...
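A quick sketch of that reduced-alphabet generator (Python; note that removing a, e, i, o, u, 0, and 1 from base 36 actually leaves 29 characters, so drop one more to match an exact base-28 set):

import secrets

# 36 alphanumerics minus the vowels A, E, I, O, U and the confusable 0/1,
# leaving a 29-character alphabet; trim further to taste.
ALPHABET = "23456789BCDFGHJKLMNPQRSTVWXYZ"

def make_code(length=6):
    """Generate a random promo code from the vowel-free alphabet."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_code())  # e.g. 'K7XQZT'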

What is the HL7 ZDS segment used for?

My very old HL7 parser has just hit a snag, as it is now getting some messages with a ZDS segment present. It was easy to fix by adding a ZDS object to my parser, but I am trying to find out what it is used for. Googling hasn't helped much. This is a sample:
ZDS|PERFORM|p0001236^PATEL^ATEST^^^^^^HHB_INOP_PRSNL^^^^OTHER|20100714101800|CD:653
ZDS|TRANSCRIBE|p0001236^PATEL^ATEST^^^^^^HHB_INOP_PRSNL^^^^OTHER|20100714101800|CD:653
ZDS|SIGN|p0001236^PATEL^ATEST^^^^^^HHB_INOP_PRSNL^^^^OTHER|20100714101912|CD:653
So, I'm interested in what each field is, though looking at this sample data, it seems I don't lose much by just dropping the whole segment.
In HL7, all segments that begin with the letter Z are considered to be custom and are not defined further by the HL7 standard. You will need to find out what system is responsible for generating these ZDS segments and ask the owners of that system to provide you their specification.
As Scott said, "Z" segments are custom and can vary from vendor to vendor. In the Cerner realm, however, ZDS segments are typically used for "Document Succession" purposes -- a means of document version tracking and synchronization between two supportive systems.
The ZDS segment is used to communicate document endorsement information (actions done or to be done) in Unsolicited Document Results. Only a specific Millennium solution uses it, so if you don't need it, just ignore it.
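For what it's worth, a minimal sketch of splitting such a segment on the standard HL7 v2 delimiters (Python; what each ZDS field actually means is still defined only by the sending system's spec):

# Split an HL7 v2 segment on the standard delimiters: '|' between fields,
# '^' between components. The meanings of ZDS-1..ZDS-4 come from the
# sending system's specification, not from the HL7 standard itself.
def parse_segment(segment: str) -> dict:
    fields = segment.split("|")
    return {
        f"{fields[0]}-{i}": field.split("^") if "^" in field else field
        for i, field in enumerate(fields[1:], start=1)
    }

sample = ("ZDS|SIGN|p0001236^PATEL^ATEST^^^^^^HHB_INOP_PRSNL^^^^OTHER"
          "|20100714101912|CD:653")
for name, value in parse_segment(sample).items():
    print(name, value)
# ZDS-1 is an action, ZDS-2 the person components, ZDS-3 a timestamp, ZDS-4 a code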

ISO Country/Currency data

All,
Our application requires data on ISO countries and currencies (where the data must be up to date). We did purchase country/currency data from ISO themselves; however, we still needed to perform a lot of manual manipulation of the data, as well as write our own tool to read and process the data into our database.
Are we going about getting this data the wrong way?
What is the norm in relation to the acquisition of country/currency data?
Are there any well-known providers out there that offer this data as a service, or through some other medium in a usable format?
Any help will be greatly appreciated.
The .NET CultureInfo class provides formatting for currencies (as well as dates, times, numbers, etc). I would never have even considered buying the data from ISO when it's available for free in the .NET runtime.
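If you're not on .NET, the same CLDR-backed data is available elsewhere; for example, a sketch using Python's Babel library (assuming Babel fits your stack):

# Babel ships CLDR locale data: currency formatting plus country names.
# pip install Babel
from babel import Locale
from babel.numbers import format_currency

print(format_currency(1234.5, "USD", locale="en_US"))  # $1,234.50
print(format_currency(1234.5, "INR", locale="en_IN"))  # ₹1,234.50

# ISO country codes and names, straight from the locale data:
territories = Locale.parse("en").territories
print(territories["FR"])  # France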
You might be interested in IBM's International Components for Unicode (ICU) library.
Open source, well known, supports "numbers, dates, times and currency amounts" formatting.
Not sure if it helps your case, but this info might be useful for somebody else... :-)
