Lua: Piece of code for a specific circumstance

I'm making a (poor) cryptography script in Lua and for this, I need to make a loop that will return a value for each number in a string, for example:
Input: 15, 18, 1, 20, 15, 18, 15, 5, 21, 1, 18, 15, 21, 16, 1, 4, 15, 18, 5, 9, 4, 5, 18, 15, 13, 1
I want each of these numbers passed to a function that will do certain math with them and then return the corresponding letter for each of the resulting numbers (15 will become 'o', 18 will become 'r', and so on).
In detail, I need a piece of code to insert into a function that will:
Pass each of the numbers in a string to a function.
After this, the function needs to convert the numbers into letters (as previously said).
Then, a new function needs to insert the resulting letters into a new string.
Here's a brief example of how it needs to behave.
Input: 8, 5, 12, 12, 15
Result: 26, 7, 15, 15, 12 (These numbers aren't constant, because of hidden math done inside the function.)
Input: 26, 7, 15, 15, 12
Result: z, g, o, o, l
Input: z, g, o, o, l
Result: "zgool"
I don't think the source code of this project is necessary here; I'll just implement this code into the functions in the script. Can someone who understands what I mean help me?

local function my_poor_cryptography(s)
  local codes = {}
  -- string to numbers
  for c in s:gmatch"%a" do
    table.insert(codes, c:byte() - (c:find"%l" and 96 or 64))
  end
  -- math here (https://en.wikipedia.org/wiki/ROT13)
  for j = 1, #codes do
    codes[j] = (codes[j] + 12) % 26 + 1
  end
  -- numbers to string
  s = s:gsub("%a",
    function(c)
      return c.char(table.remove(codes, 1) + (c:find"%l" and 96 or 64))
    end)
  return s
end
Usage:
local str = "Hello, World!"
str = my_poor_cryptography(str)
print(str) --> Uryyb, Jbeyq!
str = my_poor_cryptography(str)
print(str) --> Hello, World!

Find minimum sum

Find the minimum sum of elements with one element from each row. I think the answer is
-214, but z3py returns unsat. What is wrong?
from z3 import Solver, Int, ForAll, Or

ARR = [
    [36, 12, 90, 88, 82],
    [-92, 50, 40, 31, 43],
    [81, 28, -26, 8, -59],
    [18, -99, -70, -33, 58],
    [44, -33, 24, -92, -68],
]

s = Solver()
xs = [Int(f"x_{i}") for i, row in enumerate(ARR)]
ys = [Int(f"y_{i}") for i, row in enumerate(ARR)]
for x, y, row in zip(xs, ys, ARR):
    s.add(Or(*[x == val for val in row]))
    s.add(Or(*[y == val for val in row]))
s.add(ForAll(ys, sum(xs) <= sum(ys)))
print(s.check())  # unsat
Your encoding isn't quite correct. If you stick the following line in your program:
print(s.sexpr())
You'll see that it prints, amongst other things:
(assert (forall ((y_0 Int) (y_1 Int) (y_2 Int) (y_3 Int) (y_4 Int))
(<= (+ 0 x_0 x_1 x_2 x_3 x_4) (+ 0 y_0 y_1 y_2 y_3 y_4))))
And this is the reason why it is unsat. This is a quantified formula, so it is only satisfiable if the body holds for all values of y_0 .. y_4. Since the ys are completely unconstrained inside the quantifier, that is obviously not true, and hence the unsat result.
Instead of this formulation, you should use z3's optimization engine. Pick one variable from each row, add them, and minimize that result. Something like this:
from z3 import *

ARR = [
    [36, 12, 90, 88, 82],
    [-92, 50, 40, 31, 43],
    [81, 28, -26, 8, -59],
    [18, -99, -70, -33, 58],
    [44, -33, 24, -92, -68],
]

o = Optimize()
es = [Int(f"e_{i}") for i, row in enumerate(ARR)]
for e, row in zip(es, ARR):
    o.add(Or(*[e == val for val in row]))

minTotal = Int("minTotal")
o.add(minTotal == sum(es))
o.minimize(minTotal)
print(o.check())
print(o.model())
When I run this, I get:
sat
[e_0 = 12,
e_3 = -99,
e_2 = -59,
e_1 = -92,
e_4 = -92,
minTotal = -330]
That is, the solver picks 12 from the first row, -92 from the second, -59 from the third, -99 from the fourth, and -92 from the last row, for a minimum sum of -330.
It's easy to see that this is the correct solution, since the solver picks the minimum element from each row, and thus their sum is minimal as well. (I'm not sure why you were expecting -214 to be the answer.)
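As an aside, the quantified encoding itself can be repaired rather than abandoned: the ys must be constrained to range over the rows inside the quantifier, via an implication. A sketch, reusing ARR, xs, and ys as defined in the question; it finds the same minimum, though the Optimize engine above is the more natural tool:
from z3 import Solver, Int, ForAll, Or, Implies, And
s = Solver()
for x, row in zip(xs, ARR):
    s.add(Or(*[x == val for val in row]))
# the hypothesis of the implication pins each y to its row
ys_in_rows = And(*[Or(*[y == val for val in row]) for y, row in zip(ys, ARR)])
s.add(ForAll(ys, Implies(ys_in_rows, sum(xs) <= sum(ys))))
print(s.check())  # sat; in the model the xs sum to -330
Quantified formulas are much harder on the solver than the plain Optimize encoding, though, so prefer the latter.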

lua:15: unexpected symbol near '['

I am trying to write a function to create a CNN model. I get the following error whenever I run the script:
lua:15: unexpected symbol near '['
require('torch')
require('nn')

function CeateNvidiaModel()
  --The Nvidia model

  --Input dimensions
  local n_channels = 3
  local height = 66
  local width = 200

  local nvidia_model = nn.Sequential();
  --nvida_model:add(nn.Normalize()

  --Convolutional Layers
  nvidia_model:add(nn.SpatialConvolution(n_channels, 24, 5, 5, [2], [2]))
  nvidia_model:add(nn.ELU(true))
  nvidia_model:add(nn.SpatialConvolution(24, 36, 5, 5, [2], [2]))
  nvidia_model:add(nn.ELU(true))
  nvidia_model:add(nn.SpatialConvolution(36, 48, 5, 5, [2], [2]))
  nvidia_model:add(nn.ELU(true))
  nvidia_model:add(nn.SpatialConvolution(48, 64, 3, 3))
  nvidia_model:add(nn.ELU(true))
  nvidia_model:add(nn.SpatialConvolution(64, 64, 3, 3))
  nvidia_model:add(nn.ELU(true))

  -- Flatten Layer
  nvidia_model:add(nn.Reshape(1164))

  -- FC Layers
  nvida_model:add(nn.Linear(1164, 100))
  nvidia_model:add(nn.ELU(true))
  nvida_model:add(nn.Linear(100, 50))
  nvidia_model:add(nn.ELU(true))
  nvida_model:add(nn.Linear(50, 10))
  nvidia_model:add(nn.ELU(true))
  nvida_model:add(nn.Linear(10, 1))

  return nvida_model
end
I assume you are confusing [] and {}. In many other languages you write array literals as [1, 2, 3], but in Lua, [ and ] are only used for indexing; to write an "array literal", you use {1, 2, 3} (because arrays in Lua are just tables). In this particular case, though, you don't want a table at all: the brackets around [dW] and [dH] in the Torch documentation's signature just mark optional parameters, which you pass as plain numbers, e.g. nn.SpatialConvolution(n_channels, 24, 5, 5, 2, 2).
The error message is a bit misleading; it says unexpected symbol near '[', but in reality the [ itself is the unexpected symbol.

Is there a Python function for checking the length of an array?

I need to count the number of items in an array. Is there a function to do it? I can do it with a for loop, but if there were a function it would be 100 times easier.
arr1 = [10, 12, 87, 36, 11, 9, 73]
x = 0
for each in arr1:
    x += 1
print(x)
You can use the built-in len function:
len(arr1)
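For example, with the array from the question:
arr1 = [10, 12, 87, 36, 11, 9, 73]
print(len(arr1))  # 7
len works on any sequence (lists, tuples, strings), so there is no need for a manual loop.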

Best way to convert bit offset to an integer [duplicate]

I have a 64-bit unsigned integer with exactly 1 bit set. I’d like to assign a value to each of the possible 64 values (in this case, the odd primes, so 0x1 corresponds to 3, 0x2 corresponds to 5, …, 0x8000000000000000 corresponds to 313).
It seems like the best way would be to convert 1 → 0, 2 → 1, 4 → 2, 8 → 3, …, 2^63 → 63 and look up the values in an array. But even if that's so, I'm not sure what the fastest way to get at the binary exponent is. And there may be more efficient ways, still.
This operation will be used 10^14 to 10^16 times, so performance is a serious issue.
Finally, an optimal solution. See the end of the linked section for what to do when the input is guaranteed to have exactly one non-zero bit: http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogDeBruijn
Here's the code:
static const int MultiplyDeBruijnBitPosition2[32] =
{
    0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
    31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
};
r = MultiplyDeBruijnBitPosition2[(uint32_t)(v * 0x077CB531U) >> 27];
You may be able to adapt this to a direct multiplication-based algorithm for 64-bit inputs; otherwise, simply add one conditional to see if the bit is in the upper 32 positions or the lower 32 positions, then use the 32-bit algorithm here.
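If you want to double-check the table, the construction can be replayed in Python (a quick verification sketch):
M = 0x077CB531
tbl = [0] * 32
for bit in range(32):
    tbl[((1 << bit) * M & 0xFFFFFFFF) >> 27] = bit   # top 5 bits index the table
# every power of two maps back to its own bit position
assert all(tbl[((1 << b) * M & 0xFFFFFFFF) >> 27] == b for b in range(32))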
Update: Here's at least one 64-bit version I just developed myself, but it uses division (actually modulo).
r = Table[v%67];
For each power of 2, v%67 has a distinct value, so just put your odd primes (or bit indices if you don't want the odd-prime thing) at the right positions in the table. 3 positions (0, 17, and 34) are not used, which might be convenient if you also want to accept all-bits-zero as an input.
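A quick way to see why this works, and to build the table, in Python:
residues = [(1 << bit) % 67 for bit in range(64)]
assert len(set(residues)) == 64        # all 64 powers of two give distinct residues
table = [0] * 67                       # entries 0, 17 and 34 are never hit
for bit, r in enumerate(residues):
    table[r] = bit                     # or store your odd primes here instead
assert all(table[(1 << bit) % 67] == bit for bit in range(64))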
Update 2: 64-bit version.
r = Table[(uint64_t)(val * 0x022fdd63cc95386dull) >> 58];
This is my original work, but I got the B(2,6) De Bruijn sequence from this chess site so I can't take credit for anything but figuring out what a De Bruijn sequence is and using Google. ;-)
Some additional remarks on how this works:
The magic number is a B(2,6) De Bruijn sequence. It has the property that, if you look at a 6-consecutive-bit window, you can obtain any six-bit value in that window by rotating the number appropriately, and that each possible six-bit value is obtained by exactly one rotation.
We fix the window in question to be the top 6 bit positions, and choose a De Bruijn sequence with 0's in the top 6 bits. This makes it so we never have to deal with bit rotations, only shifts, since 0's will come into the bottom bits naturally (and we could never end up looking at more than 5 bits from the bottom in the top-6-bits window).
Now, the input value of this function is a power of 2. So multiplying the De Bruijn sequence by the input value performs a bitshift by log2(value) bits. We now have in the upper 6 bits a number which uniquely determines how many bits we shifted by, and can use that as an index into a table to get the actual length of the shift.
This same approach can be used for arbitrarily-large or arbitrarily-small integers, as long as you're willing to implement the multiplication. You simply have to find a B(2,k) De Bruijn sequence where k is the number of bits. The chess wiki link I provided above has De Bruijn sequences for values of k ranging from 1 to 6, and some quick Googling shows there are a few papers on optimal algorithms for generating them in the general case.
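If you'd rather generate such a sequence than look one up, here is a sketch of the standard Lyndon-word construction (the same algorithm described on Wikipedia); for n = 6 it yields a 64-bit constant with zeros in the top six bits, which is exactly what the shift-based indexing needs:
def de_bruijn_bits(n):
    # lexicographically least binary B(2, n) de Bruijn sequence, as a list of bits
    a = [0] * (2 * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, 2):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

# pack the 64 bits into an integer; the leading zeros end up in the top bits
M = 0
for b in de_bruijn_bits(6):
    M = (M << 1) | b
tbl = [0] * 64
for bit in range(64):
    tbl[((1 << bit) * M & (2**64 - 1)) >> 58] = bit
assert all(tbl[((1 << b) * M & (2**64 - 1)) >> 58] == b for b in range(64))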
If performance is a serious issue, then you should use intrinsics/builtins to use CPU specific instructions, such as the ones found here for GCC:
http://gcc.gnu.org/onlinedocs/gcc-4.5.0/gcc/Other-Builtins.html
Built-in function int __builtin_ffs(unsigned int x).
Returns one plus the index of the least significant 1-bit of x, or if x is zero, returns zero.
Built-in function int __builtin_clz(unsigned int x).
Returns the number of leading 0-bits in x, starting at the most significant bit position. If x is 0, the result is undefined.
Built-in function int __builtin_ctz(unsigned int x).
Returns the number of trailing 0-bits in x, starting at the least significant bit position. If x is 0, the result is undefined.
Things like this are the core of many O(1) algorithms, such as kernel schedulers which need to find the first non-empty queue signified by an array of bits.
Note: I’ve listed the unsigned int versions, but GCC has unsigned long long versions, as well.
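For what it's worth, the same operations are easy to express portably in Python (a rough equivalent of the builtins, not the builtins themselves):
def ffs(x):
    # one plus the index of the least significant set bit; 0 if x == 0
    return (x & -x).bit_length()

def ctz(x):
    # number of trailing zero bits; caller must ensure x != 0
    return (x & -x).bit_length() - 1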
You could use a binary search technique:
int pos = 0;
if ((value & 0xffffffff) == 0) {
    pos += 32;
    value >>= 32;
}
if ((value & 0xffff) == 0) {
    pos += 16;
    value >>= 16;
}
if ((value & 0xff) == 0) {
    pos += 8;
    value >>= 8;
}
if ((value & 0xf) == 0) {
    pos += 4;
    value >>= 4;
}
if ((value & 0x3) == 0) {
    pos += 2;
    value >>= 2;
}
if ((value & 0x1) == 0) {
    pos += 1;
}
This has the advantage over loops that the loop is already unrolled. However, if this is really performance critical, you will want to test and measure every proposed solution.
Some architectures (a surprising number, actually) have a single instruction that can do the calculation you want. On ARM it would be the CLZ (count leading zeroes) instruction. For Intel, the BSF (bit-scan forward) or BSR (bit-scan reverse) instruction would help you out.
I guess this isn't really a C answer, but it will get you the speed you need!
Precalculate 1 << i (for i = 0..63) and store the values in an array.
Use a binary search to find the index into the array of a given value.
Look up the prime number in another array using this index (see the sketch after this answer).
Compared to the other answer I posted here, this should only take 6 steps to find the index (as opposed to a maximum of 64). But it's not clear to me whether one step of this answer is not more time consuming than just bit shifting and incrementing a counter. You may want to try out both though.
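A sketch of this answer in Python, using bit indices in place of the prime table (bisect does the 6-step binary search for you):
import bisect

POWERS = [1 << i for i in range(64)]      # step 1: precalculated, sorted
def bit_index(v):
    # step 2: binary search; an exact match returns the index of the power itself
    return bisect.bisect_left(POWERS, v)
# step 3 would then index a 64-entry table of odd primes with this result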
See http://graphics.stanford.edu/~seander/bithacks.html - specifically "Finding integer log base 2 of an integer (aka the position of the highest bit set)" - for some alternative algorithms. (If you're really serious about speed, you might consider ditching C if your CPU has a dedicated instruction.)
Since speed, presumably not memory usage, is important, here's a crazy idea:
w1 = 1st 16 bits
w2 = 2nd 16 bits
w3 = 3rd 16 bits
w4 = 4th 16 bits
result = array1[w1] + array2[w2] + array3[w3] + array4[w4]
where array1..4 are sparsely populated 64K arrays that contain the actual prime values (and zero in the positions that don't correspond to bit positions)
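A sketch of the idea in Python, storing bit indices rather than primes (the four tables cost 4 x 64K entries, which is the memory-for-speed trade):
TABLES = [[0] * 65536 for _ in range(4)]
for bit in range(64):
    word, offset = divmod(bit, 16)
    TABLES[word][1 << offset] = bit        # nonzero only at power-of-two offsets
def bit_index(v):
    # the three words without the set bit hit entry 0, contributing nothing
    return sum(TABLES[w][(v >> (16 * w)) & 0xFFFF] for w in range(4))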
@R's solution is excellent; this is just the 64-bit variant, with the table already calculated ...
static inline unsigned char bit_offset(unsigned long long self) {
    static const unsigned char mapping[64] = {
        [0]=0,   [1]=1,   [2]=2,   [4]=3,   [8]=4,   [17]=5,  [34]=6,  [5]=7,
        [11]=8,  [23]=9,  [47]=10, [31]=11, [63]=12, [62]=13, [61]=14, [59]=15,
        [55]=16, [46]=17, [29]=18, [58]=19, [53]=20, [43]=21, [22]=22, [44]=23,
        [24]=24, [49]=25, [35]=26, [7]=27,  [15]=28, [30]=29, [60]=30, [57]=31,
        [51]=32, [38]=33, [12]=34, [25]=35, [50]=36, [36]=37, [9]=38,  [18]=39,
        [37]=40, [10]=41, [21]=42, [42]=43, [20]=44, [41]=45, [19]=46, [39]=47,
        [14]=48, [28]=49, [56]=50, [48]=51, [33]=52, [3]=53,  [6]=54,  [13]=55,
        [27]=56, [54]=57, [45]=58, [26]=59, [52]=60, [40]=61, [16]=62, [32]=63
    };
    return mapping[((self & -self) * 0x022FDD63CC95386DULL) >> 58];
}
I built the table using the provided magic constant.
>>> ', '.join('[{0}]={1}'.format(((2**bit * 0x022fdd63cc95386d) % 2**64) >> 58, bit) for bit in xrange(64))
'[0]=0, [1]=1, [2]=2, [4]=3, [8]=4, [17]=5, [34]=6, [5]=7, [11]=8, [23]=9, [47]=10, [31]=11, [63]=12, [62]=13, [61]=14, [59]=15, [55]=16, [46]=17, [29]=18, [58]=19, [53]=20, [43]=21, [22]=22, [44]=23, [24]=24, [49]=25, [35]=26, [7]=27, [15]=28, [30]=29, [60]=30, [57]=31, [51]=32, [38]=33, [12]=34, [25]=35, [50]=36, [36]=37, [9]=38, [18]=39, [37]=40, [10]=41, [21]=42, [42]=43, [20]=44, [41]=45, [19]=46, [39]=47, [14]=48, [28]=49, [56]=50, [48]=51, [33]=52, [3]=53, [6]=54, [13]=55, [27]=56, [54]=57, [45]=58, [26]=59, [52]=60, [40]=61, [16]=62, [32]=63'
Should your compiler complain about the designated initializers, here is a plain version of the table:
>>> ', '.join(map(str, {((2**bit * 0x022fdd63cc95386d) % 2**64) >> 58: bit for bit in xrange(64)}.values()))
'0, 1, 2, 53, 3, 7, 54, 27, 4, 38, 41, 8, 34, 55, 48, 28, 62, 5, 39, 46, 44, 42, 22, 9, 24, 35, 59, 56, 49, 18, 29, 11, 63, 52, 6, 26, 37, 40, 33, 47, 61, 45, 43, 21, 23, 58, 17, 10, 51, 25, 36, 32, 60, 20, 57, 16, 50, 31, 19, 15, 30, 14, 13, 12'
(The snippet above assumes that .values() iterates over the keys in sorted order; this may not be the case in the future ...)
unsigned char bit_offset(unsigned long long self) {
    static const unsigned char table[64] = {
        0, 1, 2, 53, 3, 7, 54, 27, 4, 38, 41, 8, 34, 55, 48,
        28, 62, 5, 39, 46, 44, 42, 22, 9, 24, 35, 59, 56, 49,
        18, 29, 11, 63, 52, 6, 26, 37, 40, 33, 47, 61, 45, 43,
        21, 23, 58, 17, 10, 51, 25, 36, 32, 60, 20, 57, 16, 50,
        31, 19, 15, 30, 14, 13, 12
    };
    return table[((self & -self) * 0x022FDD63CC95386DULL) >> 58];
}
simple test:
>>> table = {((2**bit * 0x022fdd63cc95386d) % 2**64) >> 58: bit for bit in xrange(64)}.values()
>>> assert all(i == table[(2**i * 0x022fdd63cc95386d % 2**64) >> 58] for i in xrange(64))
Short of using assembly or compiler-specific extensions to find the first/last bit that's set, the fastest algorithm is a binary search. First check if any of the first 32 bits are set. If so, check if any of the first 16 are set. If so, check if any of the first 8 are set. Etc. Your function to do this can directly return an odd prime at each leaf of the search, or it can return a bit index which you use as an array index into a table of odd primes.
Here's a loop implementation for the binary search, which the compiler could certainly unroll if that's deemed to be optimal:
uint32_t mask = 0xffffffff;
int pos = 0, shift = 32, i;
for (i = 6; i; i--) {
    if (!(val & mask)) {
        val >>= shift;
        pos += shift;
    }
    shift >>= 1;
    mask >>= shift;
}
val is assumed to be uint64_t, but to optimize this for 32-bit machines, you should special-case the first check, then perform the loop with a 32-bit val variable.
Call the GNU extension function ffsll, found in glibc. If the function isn't present, fall back on __builtin_ffsll. Both functions return the index + 1 of the first set bit, or zero. With Visual C++, you can use _BitScanForward64.
unsigned bit_position = 0;
while ((value & 1) == 0)
{
    ++bit_position;
    value >>= 1;
}
Then look up the primes based on bit_position as you say.
You may find that log(n) / log(2) gives you the 0, 1, 2, ... you're after in a reasonable timeframe (but beware floating-point rounding when the quotient lands near an integer boundary). Otherwise, some form of hashtable-based approach could be useful.
Another answer assuming IEEE float:
int get_bit_index(uint64_t val)
{
    union { float f; uint32_t i; } u = { val };
    return (u.i >> 23) - 127;
}
It works as specified for the input values you asked for (exactly 1 bit set) and also has useful behavior for other values (try to figure out exactly what that behavior is). No idea if it's fast or slow; that probably depends on your machine and compiler.
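The trick is easy to cross-check from Python via struct (a verification sketch, not a suggestion for production use):
import struct

def get_bit_index(val):
    bits, = struct.unpack('>I', struct.pack('>f', float(val)))  # reinterpret IEEE float bits
    return (bits >> 23) - 127                                   # unbias the exponent

assert all(get_bit_index(1 << i) == i for i in range(64))       # exact for powers of two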
From the GnuChess source:
unsigned char leadz (BitBoard b)
/**************************************************************************
 *
 * Returns the leading bit in a bitboard. Leftmost bit is 0 and
 * rightmost bit is 63. Thanks to Robert Hyatt for this algorithm.
 *
 ***************************************************************************/
{
    if (b >> 48) return lzArray[b >> 48];
    if (b >> 32) return lzArray[b >> 32] + 16;
    if (b >> 16) return lzArray[b >> 16] + 32;
    return lzArray[b] + 48;
}
Here lzArray is a pregenerated array of size 2^16. This'll save you 50% of the operations compared to a full binary search.
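lzArray itself isn't shown, but from the comments its contents can be reconstructed: entry x holds the number of leading zeros of x viewed as a 16-bit chunk. A sketch of the pregeneration (my reconstruction, in Python):
lzArray = [16] * (1 << 16)             # the value at index 0 is never used for one-bit inputs
for x in range(1, 1 << 16):
    lzArray[x] = 16 - x.bit_length()   # leading zeros within 16 bits
assert lzArray[0x8000] == 0 and lzArray[1] == 15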
This is for 32-bit Java, but it should be possible to adapt it to 64 bits.
I assume this will be the fastest approach, because there is no branching involved.
static public final int msb(int n) {
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    n >>>= 1;
    n += 1;
    return n;
}

static public final int msb_index(int n) {
    final int[] multiply_de_bruijn_bit_position = {
        0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
        31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
    };
    return multiply_de_bruijn_bit_position[(msb(n) * 0x077CB531) >>> 27];
}
Here is more information from: http://graphics.stanford.edu/~seander/bithacks.html#ZerosOnRightMultLookup
// Count the consecutive zero bits (trailing) on the right with multiply and lookup
unsigned int v;  // find the number of trailing zeros in 32-bit v
int r;           // result goes here
static const int MultiplyDeBruijnBitPosition[32] =
{
    0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
    31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
};
r = MultiplyDeBruijnBitPosition[((uint32_t)((v & -v) * 0x077CB531U)) >> 27];

// Converting bit vectors to indices of set bits is an example use for this.
// It requires one more operation than the earlier one involving modulus
// division, but the multiply may be faster. The expression (v & -v) extracts
// the least significant 1 bit from v. The constant 0x077CB531UL is a de Bruijn
// sequence, which produces a unique pattern of bits into the high 5 bits for
// each possible bit position that it is multiplied against. When there are no
// bits set, it returns 0. More information can be found by reading the paper
// Using de Bruijn Sequences to Index a 1 in a Computer Word by
// Charles E. Leiserson, Harald Prokop, and Keith H. Randall.
And lastly, the original paper:
http://supertech.csail.mit.edu/papers/debruijn.pdf

How to convert Google spreadsheet's worksheet string id to integer index (GID)?

To export a single worksheet of a Google spreadsheet to CSV, an integer worksheet index (GID) must be passed.
https://spreadsheets.google.com/feeds/download/spreadsheets/Export?key=%s&gid=%d&exportFormat=csv
But where is that information? With gdata.spreadsheets.client, I could only find string ids for worksheets, like "oc6, ocv, odf".
client = gdata.spreadsheets.client.SpreadsheetsClient()
feed = client.GetWorksheets(spreadsheet, auth_token=auth_token)
And it returns the Atom XML below (part of it):
<entry gd:etag=""URJFCB1NQSt7ImBoXhU."">
<id>https://spreadsheets.google.com/feeds/worksheets/0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c/ocw</id>
<updated>2012-06-21T08:19:46.587Z</updated>
<app:edited xmlns:app="http://www.w3.org/2007/app">2012-06-21T08:19:46.587Z</app:edited>
<category scheme="http://schemas.google.com/spreadsheets/2006" term="http://schemas.google.com/spreadsheets/2006#worksheet"/>
<title>AchievementType</title>
<content type="application/atom+xml;type=feed" src="https://spreadsheets.google.com/feeds/list/0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c/ocw/private/full"/>
<link rel="http://schemas.google.com/spreadsheets/2006#cellsfeed" type="application/atom+xml" href="https://spreadsheets.google.com/feeds/cells/0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c/ocw/private/full"/>
<link rel="http://schemas.google.com/visualization/2008#visualizationApi" type="application/atom+xml" href="https://spreadsheets.google.com/tq?key=0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c&sheet=ocw"/>
<link rel="self" type="application/atom+xml" href="https://spreadsheets.google.com/feeds/worksheets/0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c/private/full/ocw"/>
<link rel="edit" type="application/atom+xml" href="https://spreadsheets.google.com/feeds/worksheets/0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c/private/full/ocw"/>
<gs:rowCount>280</gs:rowCount>
<gs:colCount>28</gs:colCount>
</entry>
I also tried the sheet parameter, but that failed with an "Invalid Sheet" error.
https://spreadsheets.google.com/feeds/download/spreadsheets/Export?key=%s&sheet=XXX&exportFormat=csv
I guess there should be some magic function, but I could not find it. How can I convert these string ids to integer GIDs? Or can I export a worksheet by its string id?
EDIT: I just made a conversion table with Python. Dirty, but working :-(
GID_TABLE = {
    'od6': 0,  'od7': 1,  'od4': 2,  'od5': 3,  'oda': 4,
    'odb': 5,  'od8': 6,  'od9': 7,  'ocy': 8,  'ocz': 9,
    'ocw': 10, 'ocx': 11, 'od2': 12, 'od3': 13, 'od0': 14,
    'od1': 15, 'ocq': 16, 'ocr': 17, 'oco': 18, 'ocp': 19,
    'ocu': 20, 'ocv': 21, 'ocs': 22, 'oct': 23, 'oci': 24,
    'ocj': 25, 'ocg': 26, 'och': 27, 'ocm': 28, 'ocn': 29,
    'ock': 30, 'ocl': 31, 'oe2': 32, 'oe3': 33, 'oe0': 34,
    'oe1': 35, 'oe6': 36, 'oe7': 37, 'oe4': 38, 'oe5': 39,
    'odu': 40, 'odv': 41, 'ods': 42, 'odt': 43, 'ody': 44,
    'odz': 45, 'odw': 46, 'odx': 47, 'odm': 48, 'odn': 49,
    'odk': 50, 'odl': 51, 'odq': 52, 'odr': 53, 'odo': 54,
    'odp': 55, 'ode': 56, 'odf': 57, 'odc': 58, 'odd': 59,
    'odi': 60, 'odj': 61, 'odg': 62, 'odh': 63, 'obe': 64,
    'obf': 65, 'obc': 66, 'obd': 67, 'obi': 68, 'obj': 69,
    'obg': 70, 'obh': 71, 'ob6': 72, 'ob7': 73, 'ob4': 74,
    'ob5': 75, 'oba': 76, 'obb': 77, 'ob8': 78, 'ob9': 79,
    'oay': 80, 'oaz': 81, 'oaw': 82, 'oax': 83, 'ob2': 84,
    'ob3': 85, 'ob0': 86, 'ob1': 87, 'oaq': 88, 'oar': 89,
    'oao': 90, 'oap': 91, 'oau': 92, 'oav': 93, 'oas': 94,
    'oat': 95, 'oca': 96, 'ocb': 97, 'oc8': 98, 'oc9': 99,
}
I found your question looking for a solution to the same problem, and was surprised that those worksheet IDs actually correspond 1:1 to gids - I originally assumed they were assigned independently, instead of being an exercise in obfuscation.
I was able to find a slightly cleaner solution by reverse-engineering the formula they use to generate worksheet IDs from your table:
worksheetID = (gid xor 31578) encoded in base 36
So, some Python to go from a worksheet ID to gid:
def to_gid(worksheet_id):
    return int(worksheet_id, 36) ^ 31578
This is still dirty, but will work for GIDs higher than 99 without requiring giant tables. At least as long as they don't change the generation logic (which they probably won't, as it would break existing IDs that people already use).
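A quick check (Python) that the formula reproduces the whole table from the question:
assert all(to_gid(wid) == gid for wid, gid in GID_TABLE.items())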
This code works with the new Google Sheets.
// Conversion of Worksheet Ids to GIDs and vice versa
// od4 > 2
function wid_to_gid(wid) {
    var widval = wid.length > 3 ? wid.substring(1) : wid;
    var xorval = wid.length > 3 ? 474 : 31578;
    return parseInt(String(widval), 36) ^ xorval;
}
// 2 > od4
function gid_to_wid(gid) {
    var xorval = gid > 31578 ? 474 : 31578;
    var letter = gid > 31578 ? 'o' : '';
    return letter + (gid ^ xorval).toString(36);
}
I cannot add a comment to Wasilewski's post because apparently I lack reputation, so here are the two conversion functions in JavaScript, based on Wasilewski's answer:
// Conversion of Worksheet Ids to GIDs and vice versa
// od4 > 2
function wid_to_gid(wid) {
    return parseInt(String(wid), 36) ^ 31578;
}
// 2 > od4
function gid_to_wid(gid) {
    // (gid xor 31578) encoded in base 36
    return (gid ^ 31578).toString(36);
}
This is a Java adaptation of Buho's code which works with both the new Google Sheets and with the legacy Google Spreadsheets.
// "od4" to 2 (legacy style)
// "ogtw0h0" to 1017661118 (new style)
public static int widToGid(String worksheetId) {
boolean idIsNewStyle = worksheetId.length() > 3;
// if the id is in the new style, first strip the first character before converting
worksheetId = idIsNewStyle ? worksheetId.substring(1) : worksheetId;
// determine the integer to use for bitwise XOR
int xorValue = idIsNewStyle ? 474 : 31578;
// convert to gid
return Integer.parseInt(worksheetId, 36) ^ xorValue;
}
// Convert 2 to "od4" (legacy style)
// Convert 1017661118 to "ogtw0h0" (new style)
public static String gidToWid(int gid) {
boolean idIsNewStyle = gid > 31578;
// determine the integer to use for bitwise XOR
int xorValue = idIsNewStyle ? 474 : 31578;
// convert to worksheet id, prepending 'o' if it is the new style.
return
idIsNewStyle ?
'o' + Integer.toString((worksheetIndex ^ xorValue), 36):
Integer.toString((worksheetIndex ^ xorValue), 36);
}
This is a Clojure adaptation of Buho's and Julie's code which should work with both the new Google Sheets and with the legacy Google Spreadsheets.
(defn wid->gid [wid]
  (let [new-wid? (> (.length wid) 3)
        wid      (if new-wid? (.substring wid 1) wid)
        xor-val  (if new-wid? 474 31578)]
    (bit-xor (Integer/parseInt wid 36) xor-val)))

(defn gid->wid [gid]
  (let [new-gid? (> gid 31578)
        xor-val  (if new-gid? 474 31578)
        letter   (if new-gid? "o" "")]
    (str letter (Integer/toString (bit-xor gid xor-val) 36))))
If you're using Python with gspread, here's what you do:
wid = worksheet.id
widval = wid[1:] if len(wid) > 3 else wid
xorval = 474 if len(wid) > 3 else 31578
gid = int(str(widval), 36) ^ xorval
I'll probably open a PR for this.
