How do I find an error in the while loop? - dart

I need help. I wrote the code and have checked it a hundred times, but I can't find the error. All the code before the while loop works without errors; the error is in the loop itself. When you run it, you get an infinite loop.
I would be grateful if you could tell me where I made the mistake and why it turns out to be an infinite loop.
class Semis {
  int i;
  double k;
  Semis(this.i, this.k);
}

void main() {
  var p = [
    0, 1, 5, 8, 9, 10, 17, 17, 20, 24, // 0X's
    30, 32, 35, 39, 43, 43, 45, 49, 50, 54, // 1X's
    57, 60, 65, 68, 70, 74, 80, 81, 84, 85, // 2X's
    87, 91, 95, 99, 101, 104, 107, 112, 115, 116, // 3X's
    119, 121, 125, 129, 131, 134, 135, 140, 143, 145, // 4X's
    151
  ];
  Function cutLog = (List p, int n) {
    // Some array to store calculated values
    num sum = 0;
    int iter = 0;
    int stock = n;
    List<Semis> pL = [];
    var map = Map.fromIterable(p,
        key: (index) => p.indexOf(index),
        value: (item) => item / (p.indexOf(item) > 0 ? p.indexOf(item) : 1));
    var sortedMap = Map.fromEntries(
        map.entries.toList()..sort((e1, e2) => e2.value.compareTo(e1.value)));
    sortedMap.forEach(
        (i, k) => pL.isEmpty || pL.last.i > i ? pL.add(Semis(i, k)) : null);
    while (stock > 0) {
      if ((stock - pL[iter].i) > 0) {
        sum = sum + p[pL[iter].i];
        stock = stock - pL[iter].i;
      } else
        iter++;
    }
    return sum; // Good luck intern!
  };
  print(cutLog(p, 5));
}

You get an infinite loop because the condition of the loop never fails.
The condition is stock > 0. However, what you do in the loop is:
If stock minus some value is > 0, you decrement stock; therefore stock always remains higher than 0.
Otherwise, you increment the iterator.
You never actually allow stock to be decremented enough for it to become 0. I think your if comparison should be made using >= 0, if that is logical for your algorithm. If not, then you probably need to rework it further.

If you look at the list of Semis you create, its last element is Semis(0, 0.0).
That means that your loop will eventually reach this element, and then
if ((stock - pL[iter].i) > 0) {
sum = sum + p[pL[iter].i];
stock = stock - pL[iter].i;
will do nothing because pL[iter].i is zero.
You probably need to bail out of the loop at this point.
Your loop will, as Lyubomir Vasilev says, never have a false condition because stock never reaches zero. Your iter is incremented, and had it not been for the Semis(0, _) entry, you would eventually have run past the end of the pL list and gotten an index-out-of-range error. With that zero value in the list, you will run forever.

How to convert a byte array to a double (float value) in Dart

Based on this question, I have a list of bytes in Dart:
final data = [123, 34, 106, 115, 111, 110, 114, 112, 99, 34, 58, 34, 50, 46, 48, 34, 44, 34, 109, 101, 116, 104, 111, 100, 34, 58, 34, 115];
It represents a Float32 value from TensorFlow Lite. How do I convert that to a readable double value?
Your data contains 28 bytes, and since it represents 32-bit float values (each of which is 4 bytes), that's equivalent to seven 32-bit values.
final data = [
  123, 34, 106, 115, // 1
  111, 110, 114, 112, // 2
  99, 34, 58, 34, // 3
  50, 46, 48, 34, // 4
  44, 34, 109, 101, // 5
  116, 104, 111, 100, // 6
  34, 58, 34, 115, // 7
];
The ByteData class from the dart:typed_data library gives you different views on byte data. What you want is the Float32 view, which will give you a double in Dart.
import 'dart:typed_data';

final bytes = Uint8List.fromList(data);
final byteData = ByteData.sublistView(bytes);
double value = byteData.getFloat32(0);
print(value); // 8.433111377393183e+35
The 0 in getFloat32(0) means that you want to get the 32-bit float value starting at index 0 in the byte array. That only gives you the first value of the seven, though, so if you want the rest you need to loop through the entire byte list:
for (var i = 0; i < data.length; i += 4) {
  print(byteData.getFloat32(i));
}
On every loop you increase the index by 4 to get the beginning of the next 32-bit float value. Run that loop and you'll see the results for all seven values:
8.433111377393183e+35
7.3795778785962275e+28
2.9925614505443553e+21
1.0139077133430874e-8
2.308231080230816e-12
7.366162972792583e+31
2.522593777484324e-18
See also
Working with bytes in Dart

Thread 1: Fatal error: UnsafeMutablePointer.initialize overlapping range

I am trying to use
UnsafeMutablePointer<UnsafeMutablePointer<Float>?>!
as it is required as a parameter for a method I need to use. Yet I have no idea what this is or how to use it.
I created this value by doing this:
var bytes2: [Float] = [39, 77, 111, 111, 102, 33, 39, 0]
let uint8Pointer2 = UnsafeMutablePointer<Float>.allocate(capacity: 8)
uint8Pointer2.initialize(from: &bytes2, count: 8)
var bytes: [Float] = [391, 771, 1111, 1111, 1012, 331, 319, 10]
var uint8Pointer = UnsafeMutablePointer<Float>?.init(uint8Pointer2)
uint8Pointer?.initialize(from: &bytes, count: 8)
let uint8Pointer1 = UnsafeMutablePointer<UnsafeMutablePointer<Float>?>!.init(&uint8Pointer)
uint8Pointer1?.initialize(from: &uint8Pointer, count: 8)
But I get the error :
Thread 1: Fatal error: UnsafeMutablePointer.initialize overlapping range
What am I doing wrong?
You are creating bad behaviour..
var bytes2: [Float] = [39, 77, 111, 111, 102, 33, 39, 0]
let uint8Pointer2 = UnsafeMutablePointer<Float>.allocate(capacity: 8)
uint8Pointer2.initialize(from: &bytes2, count: 8)
Creates a pointer to some memory and initializes that memory to the values stored in bytes2..
So: uint8Pointer2 = [39, 77, 111, 111, 102, 33, 39, 0]
Then you decided to create a pointer that references that pointer's memory:
var uint8Pointer = UnsafeMutablePointer<Float>?.init(uint8Pointer2)
So if you printed uint8Pointer, it would have the EXACT same values as uint8Pointer2.. If you decided to change any of its values as well, it'd also change the values of uint8Pointer2..
So when you do:
var bytes: [Float] = [391, 771, 1111, 1111, 1012, 331, 319, 10]
uint8Pointer?.initialize(from: &bytes, count: 8)
It overwrote the values of uint8Pointer2 with [391, 771, 1111, 1111, 1012, 331, 319, 10]..
So far, uint8Pointer is just a shallow copy of uint8Pointer2.. Changing one affects the other..
Now you decided to do:
let uint8Pointer1 = UnsafeMutablePointer<UnsafeMutablePointer<Float>?>!.init(&uint8Pointer)
uint8Pointer1?.initialize(from: &uint8Pointer, count: 8)
Here you created a pointer (uint8Pointer1) to uint8Pointer and you said uint8Pointer1 initialize with uint8Pointer.. but you're initializing a pointer with a pointer to itself and a count of 8..
First of all, don't bother calling initialize on a pointer to pointer with a value of itself.. It's already pointing to the correct values..
What's nice is that:
uint8Pointer1?.initialize(from: &uint8Pointer, count: 1)
//Same as: memcpy(uint8Pointer1, &uint8Pointer, sizeof(uint8Pointer))
//However, they both point to the same memory address..
will crash, but:
uint8Pointer1?.initialize(from: &uint8Pointer)
//Same as: `uint8Pointer1 = uint8Pointer`.. Note: Just a re-assignment.
won't.. because it doesn't do a memcpy for the latter.. whereas the former does.
Hopefully I explained it correctly..
P.S. Name your variables properly!
Translation for the C++ people:
//Initial pointer to freshly allocated memory..
float bytes2[] = {39, 77, 111, 111, 102, 33, 39, 0};
float* uint8Pointer2 = new float[8];
memcpy(uint8Pointer2, &bytes2[0], 8 * sizeof(float));
//Shallow/Shadowing Pointer...
float* uint8Pointer = uint8Pointer2;
float bytes[] = {391, 771, 1111, 1111, 1012, 331, 319, 10};
memcpy(uint8Pointer, &bytes[0], 8 * sizeof(float));
//Pointer to pointer..
float** uint8Pointer1 = &uint8Pointer;
//Bad.. uint8Pointer1 and &uint8Pointer are the same damn thing (same memory address)..
//See the line above (float** uint8Pointer1 = &uint8Pointer)..
memcpy(uint8Pointer1, &uint8Pointer, 8 * sizeof(uint8Pointer));
//The memcpy is unnecessary because they already point to the same location.. plus it's also wrong lol.

Result sorting example Swift not showing up in Playground

I'm trying to work out an example which is provided on the developer.apple.com website ('Swift Playground'). I had to adapt it a little since Swift 2 apparently handles 'inout' variables differently; in the example shown in the presentation, 'inout' was not used in the function declarations. Anyway, the result of 'data' is not showing in the Playground, although the code doesn't show any compile errors.
import UIKit
var data = [16, 97, 13, 55, 95, 53, 18, 10, 79, 53, 79, 34, 50, 34, 0, 91, 94, 55, 6, 38, 7]
func exchange<T>(inout data: [T], i: Int, j: Int) {
    let temp = data[i]
    data[i] = data[j]
    data[j] = temp
}

func swapLeft<T: Comparable>(inout data: [T], index: Int) {
    for i in reverse(1...index) {
        if data[i] < data[i-1] {
            exchange(&data, i, i-1)
        } else {
            break
        }
    }
}

func isort<T: Comparable>(inout data: [T]) {
    for i in 1...data.count {
        swapLeft(&data, i)
    }
}
data //result [16, 97, 13, 55, 95, 53, 18, 10, 79, 53, 79, 34, 50, 34, 0, 91, 94, 55, 6, 38, 7]
isort(&data)
data //no result shown
It isn't showing a result because it is crashing with Array index out of range. Try this for isort:
func isort<T: Comparable>(inout data: [T]) {
    for i in 1..<data.count {
        swapLeft(&data, i)
    }
}
You can always use println(data)
Make sure your Debug Area is shown. See this post for more info.

Best way to convert bit offset to an integer [duplicate]

I have a 64-bit unsigned integer with exactly 1 bit set. I’d like to assign a value to each of the possible 64 values (in this case, the odd primes, so 0x1 corresponds to 3, 0x2 corresponds to 5, …, 0x8000000000000000 corresponds to 313).
It seems like the best way would be to convert 1 → 0, 2 → 1, 4 → 2, 8 → 3, …, 2^63 → 63 and look up the values in an array. But even if that's so, I'm not sure what the fastest way to get at the binary exponent is. And there may be more efficient ways, still.
This operation will be used 10^14 to 10^16 times, so performance is a serious issue.
Finally, an optimal solution. See the end of this section for what to do when the input is guaranteed to have exactly one non-zero bit: http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogDeBruijn
Here's the code:
static const int MultiplyDeBruijnBitPosition2[32] =
{
0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
};
r = MultiplyDeBruijnBitPosition2[(uint32_t)(v * 0x077CB531U) >> 27];
You may be able to adapt this to a direct multiplication-based algorithm for 64-bit inputs; otherwise, simply add one conditional to see if the bit is in the upper 32 positions or the lower 32 positions, then use the 32-bit algorithm here.
Update: Here's at least one 64-bit version I just developed myself, but it uses division (actually modulo).
r = Table[v%67];
For each power of 2, v%67 has a distinct value, so just put your odd primes (or bit indices if you don't want the odd-prime thing) at the right positions in the table. 3 positions (0, 17, and 34) are not used, which might be convenient if you also want to accept all-bits-zero as an input.
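A minimal C sketch of how that modulo table could be built and used (the helper names table67 and bit_index_mod67 are mine, not from the answer; storing the odd primes instead of bit indices is the same idea):

#include <stdint.h>

/* Sketch: fill the v % 67 lookup table described above. */
static int table67[67];

static void init_table67(void) {
    for (int i = 0; i < 64; i++)
        table67[(uint64_t)(1ULL << i) % 67] = i;
    /* positions 0, 17 and 34 stay unused; 0 could flag an all-zero input */
}

static int bit_index_mod67(uint64_t v) {
    return table67[v % 67];
}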
Update 2: 64-bit version.
r = Table[(uint64_t)(val * 0x022fdd63cc95386dull) >> 58];
This is my original work, but I got the B(2,6) De Bruijn sequence from this chess site so I can't take credit for anything but figuring out what a De Bruijn sequence is and using Google. ;-)
Some additional remarks on how this works:
The magic number is a B(2,6) De Bruijn sequence. It has the property that, if you look at a 6-consecutive-bit window, you can obtain any six-bit value in that window by rotating the number appropriately, and that each possible six-bit value is obtained by exactly one rotation.
We fix the window in question to be the top 6 bit positions, and choose a De Bruijn sequence with 0's in the top 6 bits. This makes it so we never have to deal with bit rotations, only shifts, since 0's will come into the bottom bits naturally (and we could never end up looking at more than 5 bits from the bottom in the top-6-bits window).
Now, the input value of this function is a power of 2. So multiplying the De Bruijn sequence by the input value performs a bitshift by log2(value) bits. We now have in the upper 6 bits a number which uniquely determines how many bits we shifted by, and can use that as an index into a table to get the actual length of the shift.
This same approach can be used for arbitrarily-large or arbitrarily-small integers, as long as you're willing to implement the multiplication. You simply have to find a B(2,k) De Bruijn sequence where k is the number of bits. The chess wiki link I provided above has De Bruijn sequences for values of k ranging from 1 to 6, and some quick Googling shows there are a few papers on optimal algorithms for generating them in the general case.
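To make the multiply-and-shift mechanics concrete, here is a small C program (my own sketch, using the same constant as the answer) that derives the 64-entry table simply by shifting the sequence, since multiplying by a power of two is a left shift:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    const uint64_t debruijn = 0x022fdd63cc95386dULL;
    int table[64];
    for (int i = 0; i < 64; i++)
        /* (1ULL << i) * debruijn is the same as debruijn << i */
        table[(debruijn << i) >> 58] = i;
    for (int i = 0; i < 64; i++)
        printf("[%d]=%d%s", i, table[i], i < 63 ? ", " : "\n");
    return 0;
}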
If performance is a serious issue, then you should use intrinsics/builtins to use CPU specific instructions, such as the ones found here for GCC:
http://gcc.gnu.org/onlinedocs/gcc-4.5.0/gcc/Other-Builtins.html
Built-in function int __builtin_ffs(unsigned int x).
Returns one plus the index of the least significant 1-bit of x, or if x is zero, returns zero.
Built-in function int __builtin_clz(unsigned int x).
Returns the number of leading 0-bits in x, starting at the most significant bit position. If x is 0, the result is undefined.
Built-in function int __builtin_ctz(unsigned int x).
Returns the number of trailing 0-bits in x, starting at the least significant bit position. If x is 0, the result is undefined.
Things like this are the core of many O(1) algorithms, such as kernel schedulers which need to find the first non-empty queue signified by an array of bits.
Note: I’ve listed the unsigned int versions, but GCC has unsigned long long versions, as well.
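For example, a sketch using the unsigned long long variant to index the odd primes directly (the odd_primes table is an assumption of mine, not shown):

#include <stdint.h>

/* Sketch: __builtin_ctzll counts trailing zeros, which for a value with
 * exactly one bit set is precisely the bit index.  odd_primes[] is an
 * assumed 64-entry table (3, 5, 7, ..., 313). */
extern const int odd_primes[64];

static inline int prime_for_bit(uint64_t v) {
    return odd_primes[__builtin_ctzll(v)];   /* v must be non-zero */
}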
You could use a binary search technique:
int pos = 0;
if ((value & 0xffffffff) == 0) {
pos += 32;
value >>= 32;
}
if ((value & 0xffff) == 0) {
pos += 16;
value >>= 16;
}
if ((value & 0xff) == 0) {
pos += 8;
value >>= 8;
}
if ((value & 0xf) == 0) {
pos += 4;
value >>= 4;
}
if ((value & 0x3) == 0) {
pos += 2;
value >>= 2;
}
if ((value & 0x1) == 0) {
pos += 1;
}
This has the advantage over loops that the loop is already unrolled. However, if this is really performance critical, you will want to test and measure every proposed solution.
Some architectures (a surprising number, actually) have a single instruction that can do the calculation you want. On ARM it would be the CLZ (count leading zeroes) instruction. For Intel, the BSF (bit-scan forward) or BSR (bit-scan reverse) instruction would help you out.
I guess this isn't really a C answer, but it will get you the speed you need!
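If you do want to stay in C, inline assembly can reach those instructions; a sketch for x86-64 with GCC/Clang syntax (my addition, not from the answer; BSF's result is undefined for a zero input):

#include <stdint.h>

/* Sketch: BSF (bit-scan forward) writes the index of the lowest set bit
 * into the destination register.  Undefined if v == 0. */
static inline int bsf64(uint64_t v) {
    uint64_t idx;
    __asm__ ("bsfq %1, %0" : "=r" (idx) : "rm" (v));
    return (int)idx;
}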
precalculate 1 << i (for i = 0..63) and store them in an array
use a binary search to find the index into the array of a given value
look up the prime number in another array using this index
Compared to the other answer I posted here, this should only take 6 steps to find the index (as opposed to a maximum of 64). But it's not clear to me whether one step of this answer is not more time consuming than just bit shifting and incrementing a counter. You may want to try out both though.
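A minimal C sketch of that idea (names are mine; odd_primes is the assumed 64-entry value table):

#include <stdint.h>

/* Sketch: binary search for the single set bit among the precomputed
 * powers of two, then index a parallel table of odd primes. */
extern const int odd_primes[64];
static uint64_t powers[64];

static void init_powers(void) {
    for (int i = 0; i < 64; i++)
        powers[i] = 1ULL << i;
}

static int prime_for(uint64_t v) {
    int lo = 0, hi = 63;
    while (lo < hi) {                 /* at most 6 halvings */
        int mid = (lo + hi) / 2;
        if (powers[mid] < v) lo = mid + 1;
        else                 hi = mid;
    }
    return odd_primes[lo];
}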
See http://graphics.stanford.edu/~seander/bithacks.html - specifically "Finding integer log base 2 of an integer (aka the position of the highest bit set)" - for some alternative algorithms. (If you're really serious about speed, you might consider ditching C if your CPU has a dedicated instruction).
Since speed, presumably not memory usage, is important, here's a crazy idea:
w1 = 1st 16 bits
w2 = 2nd 16 bits
w3 = 3rd 16 bits
w4 = 4th 16 bits
result = array1[w1] + array2[w2] + array3[w3] + array4[w4]
where array1..4 are sparsely populated 64K arrays that contain the actual prime values (and zero in the positions that don't correspond to bit positions)
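A sketch of that layout in C (my own illustration; the tables here map to the bit index 0..63, but filling them with the odd primes instead is just a different fill):

#include <stdint.h>

/* Sketch: one 64K lookup table per 16-bit word of the input. */
static int t0[65536], t1[65536], t2[65536], t3[65536];

static void init_tables(void) {
    for (int i = 0; i < 16; i++) {
        t0[1u << i] = i;        /* bits  0..15 */
        t1[1u << i] = i + 16;   /* bits 16..31 */
        t2[1u << i] = i + 32;   /* bits 32..47 */
        t3[1u << i] = i + 48;   /* bits 48..63 */
    }
}

static int bit_index(uint64_t v) {
    /* Exactly one word is non-zero, and every zero word indexes a
     * zero-initialized slot, so the sum selects the single hit. */
    return t0[v & 0xffff] + t1[(v >> 16) & 0xffff] +
           t2[(v >> 32) & 0xffff] + t3[v >> 48];
}

Note the four int tables come to about 1 MB, so whether this is actually fast depends far more on cache behavior than on instruction count.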
R's solution is excellent; this is just the 64-bit variant, with the table already calculated ...
static inline unsigned char bit_offset(unsigned long long self) {
    static const unsigned char mapping[64] = {
        [0]=0, [1]=1, [2]=2, [4]=3, [8]=4, [17]=5, [34]=6, [5]=7,
        [11]=8, [23]=9, [47]=10, [31]=11, [63]=12, [62]=13, [61]=14, [59]=15,
        [55]=16, [46]=17, [29]=18, [58]=19, [53]=20, [43]=21, [22]=22, [44]=23,
        [24]=24, [49]=25, [35]=26, [7]=27, [15]=28, [30]=29, [60]=30, [57]=31,
        [51]=32, [38]=33, [12]=34, [25]=35, [50]=36, [36]=37, [9]=38, [18]=39,
        [37]=40, [10]=41, [21]=42, [42]=43, [20]=44, [41]=45, [19]=46, [39]=47,
        [14]=48, [28]=49, [56]=50, [48]=51, [33]=52, [3]=53, [6]=54, [13]=55,
        [27]=56, [54]=57, [45]=58, [26]=59, [52]=60, [40]=61, [16]=62, [32]=63
    };
    return mapping[((self & -self) * 0x022FDD63CC95386DULL) >> 58];
}
I built the table using the provided mask.
>>> ', '.join('[{0}]={1}'.format(((2**bit * 0x022fdd63cc95386d) % 2**64) >> 58, bit) for bit in xrange(64))
'[0]=0, [1]=1, [2]=2, [4]=3, [8]=4, [17]=5, [34]=6, [5]=7, [11]=8, [23]=9, [47]=10, [31]=11, [63]=12, [62]=13, [61]=14, [59]=15, [55]=16, [46]=17, [29]=18, [58]=19, [53]=20, [43]=21, [22]=22, [44]=23, [24]=24, [49]=25, [35]=26, [7]=27, [15]=28, [30]=29, [60]=30, [57]=31, [51]=32, [38]=33, [12]=34, [25]=35, [50]=36, [36]=37, [9]=38, [18]=39, [37]=40, [10]=41, [21]=42, [42]=43, [20]=44, [41]=45, [19]=46, [39]=47, [14]=48, [28]=49, [56]=50, [48]=51, [33]=52, [3]=53, [6]=54, [13]=55, [27]=56, [54]=57, [45]=58, [26]=59, [52]=60, [40]=61, [16]=62, [32]=63'
should the compiler complain:
>>> ', '.join(map(str, {((2**bit * 0x022fdd63cc95386d) % 2**64) >> 58: bit for bit in xrange(64)}.values()))
'0, 1, 2, 53, 3, 7, 54, 27, 4, 38, 41, 8, 34, 55, 48, 28, 62, 5, 39, 46, 44, 42, 22, 9, 24, 35, 59, 56, 49, 18, 29, 11, 63, 52, 6, 26, 37, 40, 33, 47, 61, 45, 43, 21, 23, 58, 17, 10, 51, 25, 36, 32, 60, 20, 57, 16, 50, 31, 19, 15, 30, 14, 13, 12'
^^^^ this assumes that we iterate over sorted keys, which may not be the case in the future ...
unsigned char bit_offset(unsigned long long self) {
    static const unsigned char table[64] = {
        0, 1, 2, 53, 3, 7, 54, 27, 4, 38, 41, 8, 34, 55, 48,
        28, 62, 5, 39, 46, 44, 42, 22, 9, 24, 35, 59, 56, 49,
        18, 29, 11, 63, 52, 6, 26, 37, 40, 33, 47, 61, 45, 43,
        21, 23, 58, 17, 10, 51, 25, 36, 32, 60, 20, 57, 16, 50,
        31, 19, 15, 30, 14, 13, 12
    };
    return table[((self & -self) * 0x022FDD63CC95386DULL) >> 58];
}
simple test:
>>> table = {((2**bit * 0x022fdd63cc95386d) % 2**64) >> 58: bit for bit in xrange(64)}.values()
>>> assert all(i == table[(2**i * 0x022fdd63cc95386d % 2**64) >> 58] for i in xrange(64))
Short of using assembly or compiler-specific extensions to find the first/last bit that's set, the fastest algorithm is a binary search. First check if any of the first 32 bits are set. If so, check if any of the first 16 are set. If so, check if any of the first 8 are set. Etc. Your function to do this can directly return an odd prime at each leaf of the search, or it can return a bit index which you use as an array index into a table of odd primes.
Here's a loop implementation for the binary search, which the compiler could certainly unroll if that's deemed to be optimal:
uint32_t mask = 0xffffffff;
int pos = 0, shift = 32, i;
for (i = 6; i; i--) {
    if (!(val & mask)) {
        val >>= shift;
        pos += shift;
    }
    shift >>= 1;
    mask >>= shift;
}
val is assumed to be uint64_t, but to optimize this for 32-bit machines, you should special-case the first check, then perform the loop with a 32-bit val variable.
Call the GNU POSIX extension function ffsll, found in glibc. If the function isn't present, fall back on __builtin_ffsll. Both functions return the index + 1 of the first bit set, or zero. With Visual-C++, you can use _BitScanForward64.
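A sketch of that approach (the feature-test macro dance and the odd_primes table are assumptions of mine):

#define _GNU_SOURCE   /* for ffsll in glibc's <string.h> */
#include <string.h>
#include <stdint.h>

/* Sketch: ffsll returns one plus the index of the least significant set
 * bit (0 for a zero input), so subtract 1 before indexing.
 * odd_primes[] (3, 5, ..., 313) is an assumed table. */
extern const int odd_primes[64];

static inline int prime_via_ffs(uint64_t v) {
#if defined(__GLIBC__)
    return odd_primes[ffsll((long long)v) - 1];
#else
    return odd_primes[__builtin_ffsll((long long)v) - 1];
#endif
}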
unsigned bit_position = 0;
while ((value & 1) == 0)
{
    ++bit_position;
    value >>= 1;
}
Then look up the primes based on bit_position as you say.
You may find that log(n) / log(2) gives you the 0, 1, 2, ... you're after in a reasonable timeframe. Otherwise, some form of hashtable based approach could be useful.
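A one-line C sketch of that floating-point idea (my addition; it rounds rather than truncates, since log(n)/log(2) can land just below an integer):

#include <math.h>
#include <stdint.h>

/* Sketch: for a power of two the base-2 logarithm is an integer; rounding
 * guards against the quotient coming out as e.g. 62.999999....
 * Hardly guaranteed to be fast -- measure before using. */
static inline int bit_index_log(uint64_t v) {
    return (int)lround(log((double)v) / log(2.0));
}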
Another answer assuming IEEE float:
int get_bit_index(uint64_t val)
{
    union { float f; uint32_t i; } u = { val };
    return (u.i >> 23) - 127;
}
It works as specified for the input values you asked for (exactly 1 bit set) and also has useful behavior for other values (try to figure out exactly what that behavior is). No idea if it's fast or slow; that probably depends on your machine and compiler.
From the GnuChess source:
unsigned char leadz (BitBoard b)
/**************************************************************************
 *
 * Returns the leading bit in a bitboard. Leftmost bit is 0 and
 * rightmost bit is 63. Thanks to Robert Hyatt for this algorithm.
 *
 ***************************************************************************/
{
    if (b >> 48) return lzArray[b >> 48];
    if (b >> 32) return lzArray[b >> 32] + 16;
    if (b >> 16) return lzArray[b >> 16] + 32;
    return lzArray[b] + 48;
}
Here lzArray is a pregenerated array of size 2^16. This'll save you 50% of the operations compared to a full binary search.
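The answer leaves lzArray pregenerated; here is one way it could be filled (my sketch, matching the leftmost-bit-is-0 convention above):

/* Sketch: entry i holds the number of leading zeros of i viewed as a
 * 16-bit word.  lzArray[0] = 16 is my own convention; leadz() only
 * consults it when the whole board is zero. */
static unsigned char lzArray[1 << 16];

static void init_lzArray(void) {
    lzArray[0] = 16;
    for (unsigned i = 1; i < (1u << 16); i++) {
        unsigned n = 0, x = i;
        while (!(x & 0x8000)) { x <<= 1; n++; }
        lzArray[i] = (unsigned char)n;
    }
}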
This is for 32-bit Java, but it should be possible to adapt it to 64 bits.
I assume this will be the fastest because there is no branching involved.
static public final int msb(int n) {
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    n >>>= 1;
    n += 1;
    return n;
}

static public final int msb_index(int n) {
    final int[] multiply_de_bruijn_bit_position = {
        0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
        31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
    };
    return multiply_de_bruijn_bit_position[(msb(n) * 0x077CB531) >>> 27];
}
Here is more information from: http://graphics.stanford.edu/~seander/bithacks.html#ZerosOnRightMultLookup
// Count the consecutive zero bits (trailing) on the right with multiply and lookup
unsigned int v; // find the number of trailing zeros in 32-bit v
int r; // result goes here
static const int MultiplyDeBruijnBitPosition[32] =
{
0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
};
r = MultiplyDeBruijnBitPosition[((uint32_t)((v & -v) * 0x077CB531U)) >> 27];
// Converting bit vectors to indices of set bits is an example use for this.
// It requires one more operation than the earlier one involving modulus
// division, but the multiply may be faster. The expression (v & -v) extracts
// the least significant 1 bit from v. The constant 0x077CB531UL is a de Bruijn
// sequence, which produces a unique pattern of bits into the high 5 bits for
// each possible bit position that it is multiplied against. When there are no
// bits set, it returns 0. More information can be found by reading the paper
// Using de Bruijn Sequences to Index a 1 in a Computer Word by
// Charles E. Leiserson, Harald Prokop, and Keith H. Randall.
and lastly:
http://supertech.csail.mit.edu/papers/debruijn.pdf

How to convert Google spreadsheet's worksheet string id to integer index (GID)?

To export a single worksheet of a Google spreadsheet to CSV, an integer worksheet index (GID) is required to be passed.
https://spreadsheets.google.com/feeds/download/spreadsheets/Export?key=%s&gid=%d&exportFormat=csv
But where is that information? With gdata.spreadsheets.client, I could find string IDs for worksheets like "oc6, ocv, odf".
client = gdata.spreadsheets.client.SpreadsheetsClient()
feed = client.GetWorksheets(spreadsheet, auth_token=auth_token)
And it returns below atom XML. (part of it)
<entry gd:etag=""URJFCB1NQSt7ImBoXhU."">
<id>https://spreadsheets.google.com/feeds/worksheets/0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c/ocw</id>
<updated>2012-06-21T08:19:46.587Z</updated>
<app:edited xmlns:app="http://www.w3.org/2007/app">2012-06-21T08:19:46.587Z</app:edited>
<category scheme="http://schemas.google.com/spreadsheets/2006" term="http://schemas.google.com/spreadsheets/2006#worksheet"/>
<title>AchievementType</title>
<content type="application/atom+xml;type=feed" src="https://spreadsheets.google.com/feeds/list/0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c/ocw/private/full"/>
<link rel="http://schemas.google.com/spreadsheets/2006#cellsfeed" type="application/atom+xml" href="https://spreadsheets.google.com/feeds/cells/0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c/ocw/private/full"/>
<link rel="http://schemas.google.com/visualization/2008#visualizationApi" type="application/atom+xml" href="https://spreadsheets.google.com/tq?key=0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c&sheet=ocw"/>
<link rel="self" type="application/atom+xml" href="https://spreadsheets.google.com/feeds/worksheets/0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c/private/full/ocw"/>
<link rel="edit" type="application/atom+xml" href="https://spreadsheets.google.com/feeds/worksheets/0AvhN_YU3r5e9dGpTWGx3UVU3MTczaXJuNEFKQjMwN2c/private/full/ocw"/>
<gs:rowCount>280</gs:rowCount>
<gs:colCount>28</gs:colCount>
</entry>
I also tried the sheet parameter, but it failed with an "Invalid Sheet" error.
https://spreadsheets.google.com/feeds/download/spreadsheets/Export?key=%s&sheet=XXX&exportFormat=csv
I guess there should be some magic function, but I could not find it. How can I convert these to integer IDs? Or can I export a worksheet with a string ID?
EDIT: I just made a conversion table with Python. Dirty, but working :-(
GID_TABLE = {
'od6': 0,
'od7': 1,
'od4': 2,
'od5': 3,
'oda': 4,
'odb': 5,
'od8': 6,
'od9': 7,
'ocy': 8,
'ocz': 9,
'ocw': 10,
'ocx': 11,
'od2': 12,
'od3': 13,
'od0': 14,
'od1': 15,
'ocq': 16,
'ocr': 17,
'oco': 18,
'ocp': 19,
'ocu': 20,
'ocv': 21,
'ocs': 22,
'oct': 23,
'oci': 24,
'ocj': 25,
'ocg': 26,
'och': 27,
'ocm': 28,
'ocn': 29,
'ock': 30,
'ocl': 31,
'oe2': 32,
'oe3': 33,
'oe0': 34,
'oe1': 35,
'oe6': 36,
'oe7': 37,
'oe4': 38,
'oe5': 39,
'odu': 40,
'odv': 41,
'ods': 42,
'odt': 43,
'ody': 44,
'odz': 45,
'odw': 46,
'odx': 47,
'odm': 48,
'odn': 49,
'odk': 50,
'odl': 51,
'odq': 52,
'odr': 53,
'odo': 54,
'odp': 55,
'ode': 56,
'odf': 57,
'odc': 58,
'odd': 59,
'odi': 60,
'odj': 61,
'odg': 62,
'odh': 63,
'obe': 64,
'obf': 65,
'obc': 66,
'obd': 67,
'obi': 68,
'obj': 69,
'obg': 70,
'obh': 71,
'ob6': 72,
'ob7': 73,
'ob4': 74,
'ob5': 75,
'oba': 76,
'obb': 77,
'ob8': 78,
'ob9': 79,
'oay': 80,
'oaz': 81,
'oaw': 82,
'oax': 83,
'ob2': 84,
'ob3': 85,
'ob0': 86,
'ob1': 87,
'oaq': 88,
'oar': 89,
'oao': 90,
'oap': 91,
'oau': 92,
'oav': 93,
'oas': 94,
'oat': 95,
'oca': 96,
'ocb': 97,
'oc8': 98,
'oc9': 99
}
I found your question looking for a solution to the same problem, and was surprised that those worksheet IDs actually correspond 1:1 to gids - I originally assumed they were assigned independently, instead of being an exercise in obfuscation.
I was able to find a slightly cleaner solution by reverse-engineering the formula they use to generate worksheet IDs from your table:
worksheetID = (gid xor 31578) encoded in base 36
So, some Python to go from a worksheet ID to gid:
def to_gid(worksheet_id):
    return int(worksheet_id, 36) ^ 31578
This is still dirty, but will work for GIDs higher than 99 without requiring giant tables. At least as long as they don't change the generation logic (which they probably won't, as it would break existing IDs that people already use).
This code works with the new Google Sheets.
// Conversion of Worksheet Ids to GIDs and vice versa
// od4 > 2
function wid_to_gid(wid) {
    var widval = wid.length > 3 ? wid.substring(1) : wid;
    var xorval = wid.length > 3 ? 474 : 31578;
    return parseInt(String(widval), 36) ^ xorval;
}
// 2 > od4
function gid_to_wid(gid) {
    var xorval = gid > 31578 ? 474 : 31578;
    var letter = gid > 31578 ? 'o' : '';
    return letter + parseInt((gid ^ xorval)).toString(36);
}
I cannot add a comment to Wasilewski's post because apparently I lack reputation, so here are the two conversion functions in JavaScript, based on Wasilewski's answer:
// Conversion of Worksheet Ids to GIDs and vice versa
// od4 > 2
function wid_to_gid(wid) {
    return parseInt(String(wid), 36) ^ 31578;
}
// 2 > od4
function gid_to_wid(gid) {
    // (gid xor 31578) encoded in base 36
    return parseInt((gid ^ 31578)).toString(36);
}
This is a Java adaptation of Buho's code which works with both the new Google Sheets and with the legacy Google Spreadsheets.
// "od4" to 2 (legacy style)
// "ogtw0h0" to 1017661118 (new style)
public static int widToGid(String worksheetId) {
    boolean idIsNewStyle = worksheetId.length() > 3;
    // if the id is in the new style, first strip the first character before converting
    worksheetId = idIsNewStyle ? worksheetId.substring(1) : worksheetId;
    // determine the integer to use for bitwise XOR
    int xorValue = idIsNewStyle ? 474 : 31578;
    // convert to gid
    return Integer.parseInt(worksheetId, 36) ^ xorValue;
}

// Convert 2 to "od4" (legacy style)
// Convert 1017661118 to "ogtw0h0" (new style)
public static String gidToWid(int gid) {
    boolean idIsNewStyle = gid > 31578;
    // determine the integer to use for bitwise XOR
    int xorValue = idIsNewStyle ? 474 : 31578;
    // convert to worksheet id, prepending 'o' if it is the new style
    return idIsNewStyle
            ? "o" + Integer.toString(gid ^ xorValue, 36)
            : Integer.toString(gid ^ xorValue, 36);
}
This is a Clojure adaptation of Buho's and Julie's code which should work with both the new Google Sheets and with the legacy Google Spreadsheets.
(defn wid->gid [wid]
  (let [new-wid? (> (.length wid) 3)
        wid      (if new-wid? (.substring wid 1) wid)
        xor-val  (if new-wid? 474 31578)]
    (bit-xor (Integer/parseInt wid 36) xor-val)))

(defn gid->wid [gid]
  (let [new-gid? (> gid 31578)
        xor-val  (if new-gid? 474 31578)
        letter   (if new-gid? "o" "")]
    (str letter (Integer/toString (bit-xor gid xor-val) 36))))
If you're using Python with gspread, here's what you do:
wid = worksheet.id
widval = wid[1:] if len(wid) > 3 else wid
xorval = 474 if len(wid) > 3 else 31578
gid = int(str(widval), 36) ^ xorval
I'll probably open a PR for this.
