Google Sheets merged cells create extra spaces - google-sheets
I have a sheet with merged cells, and a script writes its results into those merged cells.
When I copy this result from the merged cells, I get multiple spaces at the end.
Like: Result #1________ (each « _ » represents an invisible space)
When I put the same result in a normal cell (not merged), it doesn't add any spaces at the end:
Result #1
I tried multiple cell formats (center aligned, left aligned, etc.) but nothing changed.
Do you have any idea why?
Thanks!
EDIT: added the script.
Script
function Devise() {
  const sheetName = "Missions";
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName(sheetName);
  var Devise = "";
  var NombreMission = "";
  var NomOperateurs = "";
  if (sheet.getRange("H2").getValue() == "") { // If mission 7 is empty
    NombreMission = 6; // Only count 6 missions
  } else {
    NombreMission = 7; // Otherwise count 7 missions
  }
  for (var i = 1; i < NombreMission + 1; i++) { // FOR loop over NombreMission missions
    if (sheet.getRange(2, i + 1).getValue() == "") { continue; } // If the mission is empty, skip it
    Devise = Devise + i + "/";
    var l = 0; // Variable used to indicate "RIEN" when nobody is assigned to the mission
    NomOperateurs = ""; // Reset the names for the next mission
    for (var j = 1; j < 27 + 1; j++) { // FOR loop over all operators
      if (sheet.getRange(j + 2, i + 1).getFontWeight() == 'bold') { // Check whether the cell is bold
        /*if (i != NombreMission) { // As long as it is not the last mission ...
          Devise = Devise + sheet.getRange(j + 2, 1).getValue() + " "; // ... display the operators
        }*/
        NomOperateurs = NomOperateurs + sheet.getRange(j + 2, 1).getValue() + " ";
        l = l + 1; // Count the operators
      }
    } // End of the operators FOR loop
    if (l == 24) { // If all the operators are on one mission...
      Devise = Devise + "ALL OPs! "; // ... display "ALL OPs!"
    } else if (i == NombreMission && l != 0) { // Otherwise, if it is the last mission and there are still operators to place...
      Devise = Devise + "Autres + Epic "; // ... indicate that this is the remainder plus the epics
    } else if (l == 0) { // Otherwise, if there is no operator to place...
      Devise = Devise + "RIEN "; // ... display "RIEN"
    } else { // Otherwise ...
      Devise = Devise + NomOperateurs; // ... display the operators
    }
  } // END OF THE NombreMission FOR LOOP
  if (NombreMission == 6 && Devise != "") { Devise = Devise + "7/!NOTHING!"; }
  sheet.getRange("K13").setValue(Devise);
}
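To check what is actually stored in K13 (as opposed to what the clipboard adds when copying), the value can be logged with its invisible characters made explicit. A minimal sketch, assuming the same Missions sheet and K13 cell as in the script above:

function logK13() {
  // Log the stored value in quoted/escaped form, so any trailing spaces or line breaks become visible.
  var value = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Missions").getRange("K13").getValue();
  Logger.log(JSON.stringify(value));
}

If the logged string ends cleanly, the extra whitespace is being added when the merged cell is copied, not when the script writes it.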
Your problem is related to the data you copied and the way you copied it, since pasting text into merged cells doesn't create any new lines.
Also, an important thing to keep in mind is that CTRL+ENTER creates the space mentioned, also known as a line break.
So, for example, suppose this cell contains the text Text plus a line break:
When the text from that cell is copied and pasted into a merged cell, it will look like this, which is the same outcome as the one you mentioned:
But if you paste the same text into a simple cell, it will look like this:
This is essentially because the line break signifies the start of a new cell.
For example, this cell contains this text with line breaks:
After the text is copied and pasted into a different cell, this is how it will actually be pasted:
To solve your issue, I suggest copying only the text you need and, if possible, avoiding line breaks.
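If the value is written by a script anyway, another option along the same lines is to strip trailing whitespace before writing it, so nothing invisible is stored in the first place. A minimal sketch, assuming the same Devise string and K13 target cell as in the question's script:

// Remove trailing spaces and line breaks before writing the result.
var cleaned = Devise.replace(/\s+$/, "");
sheet.getRange("K13").setValue(cleaned);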
Reference
Edit and Format a Spreadsheet.
Encountering the same issue.
I have a merged cell with text. If I select the cell and paste it into Notepad, it includes quite a lot of white space.
I've checked and if the merged cell spans two rows, the white space includes a line break.
If the merged cell spans one row but two columns, the white space does not include a line break.
If I have a single cell that takes its value from the merged cell (=A1), the text does not include the white space.
So the addition of the whitespace is definitely the result of having a merged cell.
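A workaround consistent with that last observation: copy from a plain helper cell that references the merged cell, rather than copying the merged range itself. For example (hypothetical layout, assuming the merged range starts at A1), a helper cell containing

=TRIM(A1)

holds only the text, and copying that cell into Notepad should not carry the merged range's extra whitespace or line break.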
Related
BLE write characteristic iOS
It's me again. I got a connection between a mini thermal printer and an iOS device, and everything is fine except for the number of characters it prints. If I want to print a long string, it only prints a few of them. And the worst part is this: for example, if I want to print "aaaaa1aaaa1" and then "bbbbb2bbbb2", the result is "aaaaa1" "aaaa1b" "bbbb2b" "bbbb2" (each block is separated). This is the code for the print button:

@IBAction func btnImprimironClick(_ sender: Any) {
    let mensaje = "-----Guillermo Celi (CREO-SUMA) dijo que el primer mandatario está dentro del plazo para remitir un alcance al veto parcial, y pueda “objetar la creación de los cuerpos de seguridad para la custodia de burócratas”. Explicó que su bancada está en contra de ese capítulo del proyecto porque es inconstitucional, y que esa tarea le corresponde a las Fuerzas Armadas y la Policía Nacional. La comisión de Soberanía y Asuntos Internacionales, presidida por Doris Soliz (AP), se allanó al veto parcial del Ejecutivo. Ella señaló que lo único que cabe es acoger el informe pese al pedido de la oposición."
    var datos = mensaje.data(using: .utf8)!
    self.printer.writeValue(datos, for: characteristic1, type: CBCharacteristicWriteType.withoutResponse)
}

Please help me with this; it could be something wrong with the code or with the UTF-8. Thanks.
For some reason it's about the encoding: at first I used UTF-8 because it's the standard, but I changed to macOSRoman and it works perfectly.

static func printText(text: String) {
    let text = "Some long paragraph in spanish version."
    let byteArray = text.data(using: String.Encoding.macOSRoman)
    self.printer.writeValue(byteArray!, for: characteristic1, type: CBCharacteristicWriteType.withoutResponse)
}
Python splitting text returns a str and a list of str
I wonder whether someone can help me with the syntax to split my text file into key, value pairs. The file looks like this:

Abbasso: termine con cui si indicano gli ambienti situati sotto il ponte di coperta.
Abbattuta: manovra che consiste nel puggiare sino a fare prendere il vento alle vele sulle mure opposte.
Abbisciare: (fr.: prendre la biture; ingl.: to coil) stendere un cavo o una catena come fosse una biscia in modo da evitare che si imbrogli successivamente, quando sarà posto in opera.
Abbordo: (fr.: abordage; ingl.: collision) collisione in mare. Sinonimo, poco usato, di accosto e di abbordaggio.
Abbrivo: (fr.: erre; ingl.: way-on) inerzia dell'imbarcazione a continuare nel suo movimento anche quando è cessata la spinta propulsiva, sia essa a vela che a motore.
Abbuono: (fr.: bonification, rating; ingl.: rating) compenso: (o vantaggio) dato ad una imbarcazione per permetterle di gareggiare più equamente: (ad esempio abbuono per anzianità di costruzione dello scafo).

My function at the minute gives me a key that is a str, but a value that is a list. Instead I want the value also to be a str. I get that my problem is that what should be the value is being split on every colon instead of only on the leftmost one.

import pickle

def create_dict():
    eng_fr_it_dict = {}
    f_name = "dizionario_della_vela.txt"
    handle = open(f_name, encoding='utf8')
    for line in handle:
        #print(line)
        if line.startswith(" "):
            continue
        line.lstrip()
        terms = line.split(": ")
        #print(terms[1:])
        term = terms[0].lstrip()
        expan = terms[1:]
        print(type(term), type(expan))
        eng_fr_it_dict[term] = eng_fr_it_dict.get(term, expan)
    with open("eng_fr_it_dict.txt", "wb") as infile:
        pickle.dump(eng_fr_it_dict, infile)
    print(eng_fr_it_dict)

Can you suggest a cleverer way to do this, or will I have to work out how to convert the list of str to a single str? I thought that there was a built-in function to split on only the first occurrence, but obviously not.
file = open("dizionario_della_vela.txt", "r") data = file.read() file.close() data = data.split("\n") # getting every line as seperate list myDict = {} for line in data: line = line.split(":") key = line[0] # getting first element as key value = ":".join(line[1:]) # joins elements (starting with second) with # ":". We need this because previous line # was splitted by ":" to get your key. This # is where "string" value is produced. myDict[key] = value for key in myDict.keys(): print(myDict[key])
You are not allowed to call openByUrl. OAuth 2.0?
I run a script "A" from a Google Docs text document ("document") to retrieve the document's text ("Name Surname") and send it into a Google spreadsheet ("Spreadsheet") with SpreadsheetApp.openByUrl(""). Script "A" is in a library. This works well with documents whose script A is on OAuth 1.0; on the other hand, it does not work when script A is on OAuth 2.0. The error message I get is: You are not allowed to call openByUrl. What should I do to run my new script under OAuth 2.0? What should I replace this SpreadsheetApp call with?

var ss = SpreadsheetApp.openByUrl("https://docs.google.com/spreadsheets/d/19rWt8JEGbYM29-W4tI2gJHFDBHJj6peX1kjfdhlkskjFt2gFkU/edit#gid=3950BSV3");

Here is the complete script in the library. It runs perfectly if the document was created with OAuth 1.0 but does not work if the document was created with OAuth 2.0:

///// Facturer Acte ////
function FacturerActe() {
  var regexp = /[^0-9]*/g; // extracts the character string before the numeric part
  var doc = DocumentApp.getActiveDocument().getText();
  var result = regexp.exec(doc);
  var PrenomNom = new RegExp(result, "gm");
  Logger.log(PrenomNom.getText);
  var ss = SpreadsheetApp.openByUrl("https://docs.google.com/spreadsheets/d/19rWt8JEGbYM29-W4tI2gj6peXR3hjvj51FxDFt2gFkU/edit#gid=395019283");
  var date = Utilities.formatDate(new Date(), ss.getSpreadsheetTimeZone(), "d"+"-"+"mm"+"-"+"y");
  var sheet = ss.getSheetByName(date);
  ss.setActiveSheet(sheet);
  var cell = sheet.getRange("A40");
  cell.setNote("Aujourd'hui est un nouveau jour ! Nous sommes le :" + date);
  selectFirstEmptyRow(); // Places the cursor on the first empty row of column "B"
}

//* Place the current user's cursor in the first cell of the first empty row. //*
function selectFirstEmptyRow() {
  var ss = SpreadsheetApp.openByUrl("https://docs.google.com/spreadsheets/d/19rWt8JEGbYM29-W4tI2gj6peXR3hjvj51FxDFt2gFkU/edit#gid=395019283");
  var date = Utilities.formatDate(new Date(), ss.getSpreadsheetTimeZone(), "d"+"-"+"mm"+"-"+"y");
  var sheet = ss.getSheetByName(date);
  var regexp = /[^0-9]*/g; // extracts the character string before the numeric part
  var doc = DocumentApp.getActiveDocument().getText();
  var result = regexp.exec(doc);
  var regexp = /\s[A-Z a-z]+/g; // extracts the spaces before and after Nom Prenom //* Extracts the blanks
  var result = regexp.exec(result);
  /// var result = result.replace(/^[\r\n]+|\.|[\r\n]+$/g, ""); // strips the spaces before and after Nom Prenom -- DOESN'T WORK in GAS
  sheet.setActiveSelection(sheet.getRange("B" + getFirstEmptyRowWholeRow())).setValue(result);
  Logger.log(result.getText);
}

/**
 * "Finds the first empty row of column "B"" from Mogsdad's checker.
 */
function getFirstEmptyRowWholeRow() {
  var ss = SpreadsheetApp.openByUrl("https://docs.google.com/spreadsheets/d/19rWt8JEGbYM29-W4tI2gj6peXR3hjvj51FxDFt2gFkU/edit#gid=395019283");
  var date = Utilities.formatDate(new Date(), ss.getSpreadsheetTimeZone(), "d"+"-"+"mm"+"-"+"y");
  var sheet = ss.getSheetByName(date);
  var range = sheet.getDataRange();
  var values = range.getValues();
  var row = 1;
  for (var row = 1; row < values.length; row++) {
    if (!values[row].join("")) break;
  }
  return (row + 1);
}
///// End Facturer Acte ////
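Two things may be worth checking here, offered as suggestions rather than a confirmed fix: both the library and the document-bound script that uses it need to be (re)authorized for full Spreadsheet access, since openByUrl is a restricted call, and the URL-based call can be replaced by an ID-based one so the #gid fragment plays no role. A minimal sketch with a placeholder ID:

// Hypothetical sketch: open the spreadsheet by ID instead of by its full URL.
// "YOUR_SPREADSHEET_ID" is the long token between /d/ and /edit in the sheet's URL.
var ss = SpreadsheetApp.openById("YOUR_SPREADSHEET_ID");
// The rest of the functions can then use ss exactly as before.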
Powershell script for data parsing
I am looking for a Powershell script that could help me for this task: Got data like this: "No.","time","1-1","1-2","1-3","1-4","1-5","1-6","1-7","1-8","1-9","1-10","1-11","1-12","1-13","1-14","1-15","5-1","5-2","5-3","5-4","5-5","5-6","5-7","5-8","5-9","5-11","5-13","5-15","9-1","9-3","9-5","9-7","9-8","9-9","13-1","13-2","13-3","13-4","13-5","13-6","13-7","13-8","13-9","13-10","17-1","17-2","17-3","17-4","17-5","17-6","17-7","17-8","17-9","E1-1","00:FE:FFX(X2049-1)","00:00:8DX(X2050-1)","00:00:8CX(X2051-1)","00:00:8BX(X2052-1)","00:00:8EX(X2053-1)","00:00:8FX(X2054-1)","00:00:97X(X2055-1)","00:00:96X(X2056-1)","00:00:92X(X2057-1)","00:00:99X(X2058-1)","00:00:98X(X2059-1)","00:00:94X(X2060-1)","00:00:93X(X2061-1)","00:00:90X(X2062-1)","00:00:95X(X2063-1)","00:00:91X(X2064-1)","00:00:9FX(X2065-1)","00:00:9CX(X2066-1)","00:00:A0X(X2067-1)","00:00:A1X(X2068-1)","00:00:9AX(X2069-1)","00:00:9EX(X2070-1)","00:00:A5X(X2071-1)","00:00:A3X(X2072-1)","00:00:A4X(X2073-1)","00:00:9BX(X2074-1)","00:00:A2X(X2075-1)","00:02:D2X(X2076-1)","00:00:A6X(X2077-1)","00:00:A7X(X2078-1)","00:01:0CX(X2079-1)","00:60:48X(X2080-1)","00:00:B2X(X2081-1)","00:02:B4X(X2082-1)","00:02:43X(X2083-1)","00:00:AEX(X2084-1)","00:00:ADX(X2085-1)","00:02:E4X(X2086-1)","00:02:BDX(X2087-1)","00:00:B1X(X2088-1)","00:00:DFX(X2089-1)","00:00:B3X(X2090-1)","00:60:40X(X2091-1)","00:60:41X(X2092-1)","00:00:B5X(X2093-1)","00:00:B7X(X2094-1)","00:00:C3X(X2095-1)","00:60:42X(X2096-1)","00:00:C9X(X2097-1)","00:00:C2X(X2098-1)","00:00:C1X(X2099-1)","00:00:C4X(X2100-1)","00:00:B4X(X2101-1)","00:00:2FX(X2102-1)","00:00:BAX(X2103-1)","00:00:B6X(X2104-1)","00:00:BFX(X2105-1)","00:00:C8X(X2106-1)","00:00:D3X(X2107-1)","00:00:B8X(X2108-1)","00:00:C5X(X2109-1)","00:00:CFX(X2110-1)","00:00:CAX(X2111-1)","00:00:CCX(X2112-1)","00:60:43X(X2113-1)","00:00:D9X(X2114-1)","00:00:BCX(X2115-1)","00:00:A8X(X2116-1)","00:00:C7X(X2117-1)","00:00:D0X(X2118-1)","00:00:BBX(X2119-1)","00:01:3BX(X2120-1)","00:01:3EX(X2121-1)","00:00:BEX(X2122-1)","00:00:BDX(X2123-1)" "1","2013/11/04 15:45",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1550,0,1010,58,81,73,197,91,275,286,378,44,58,101,140,41,66,144,107,62,17,36,8,46,76,98,-5,130,217,-5,-5,0,-5,-5,0,0,-5,-5,144,0,5,-5,-5,15,281,2859,-5,1,442,724,13,12,880,97,171,130,30,0,49,15,0,82,12,-5,0,443,0,55,64,1269,-5,-5,41,172 "2","2013/11/04 15:46",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1710,0,903,57,91,42,172,95,609,281,274,34,126,384,254,39,49,315,90,46,20,197,8,71,61,89,-5,247,220,-5,-5,0,-5,-5,0,0,-5,-5,126,0,12,-5,-5,16,258,3298,-5,4,647,716,1,9,868,101,208,26,30,0,53,17,0,89,9,-5,0,448,0,36,68,1394,-5,-5,39,67 "3","2013/11/04 15:47",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1548,0,853,55,91,71,193,145,103,269,272,38,77,142,184,39,180,796,85,44,18,517,7,101,64,88,-5,549,138,-5,-5,0,-5,-5,0,0,-5,-5,156,0,3,-5,-5,22,260,2496,-5,18,448,620,15,6,789,194,239,66,96,0,31,13,0,164,8,-5,0,344,0,33,55,1121,-5,-5,72,121 "4","2013/11/04 15:48",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1558,0,874,34,76,38,201,550,113,288,158,18,64,116,458,42,51,127,90,44,16,50,6,69,66,102,-5,116,294,-5,-5,0,-5,-5,0,0,-5,-5,116,0,1,-5,-5,7,210,3038,-5,5,81,553,5,6,834,53,248,26,88,0,36,17,0,17,9,-5,0,78,0,206,55,1450,-5,-5,45,92 
"5","2013/11/04 15:49",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1620,0,900,39,88,37,229,171,211,311,264,23,104,128,506,42,201,50,98,46,19,62,6,61,59,102,-5,102,306,-5,-5,0,-5,-5,0,0,-5,-5,126,0,3,-5,-5,16,241,3235,-5,11,353,740,8,8,818,68,244,24,111,0,21,14,0,19,10,-5,0,91,0,93,63,1567,-5,-5,50,103 "6","2013/11/04 15:50",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1745,0,907,44,83,37,189,213,293,265,130,47,68,514,222,42,106,142,92,62,18,338,6,49,79,88,-5,140,231,-5,-5,0,-5,-5,0,0,-5,-5,135,0,5,-5,-5,43,376,3095,-5,1,300,656,1,9,790,91,263,54,103,0,29,14,0,15,11,-5,0,91,0,81,58,1579,-5,-5,57,104 "7","2013/11/04 15:51",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1786,0,972,45,84,55,195,110,798,324,150,31,191,1406,332,1126,225,60,87,57,70,203,7,45,62,81,-5,112,235,-5,-5,0,-5,-5,0,0,-5,-5,121,0,60,-5,-5,4,354,3378,-5,2,421,629,2,136,737,81,196,128,92,0,21,16,0,18,13,-5,0,71,0,90,55,1184,-5,-5,41,170 "8","2013/11/04 15:52",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1704,0,928,31,87,38,199,111,286,341,195,24,299,1065,292,329,60,54,87,45,18,54,6,67,72,89,-5,102,204,-5,-5,0,-5,-5,0,0,-5,-5,172,0,22,-5,-5,5,494,3337,-5,9,169,792,6,15,764,159,227,45,92,0,36,16,0,16,11,-5,0,78,0,93,65,1706,-5,-5,61,81 "9","2013/11/04 15:53",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1494,0,857,28,112,47,188,649,111,318,153,21,87,445,288,34,45,52,87,44,29,94,10,61,74,98,-5,152,129,-5,-5,0,-5,-5,0,0,-5,-5,172,0,1,-5,-5,10,324,3371,-5,1,46,625,3,7,824,54,216,25,85,0,34,17,0,34,12,-5,0,85,0,104,66,1578,-5,-5,32,40 "10","2013/11/04 15:54",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1565,0,850,80,116,38,217,98,127,370,329,174,96,251,184,37,107,66,380,43,18,92,8,41,65,96,-5,104,231,-5,-5,0,-5,-5,0,0,-5,-5,162,0,2,-5,-5,6,272,3743,-5,11,314,545,7,5,962,66,5,20,28,0,13,15,0,17,11,-5,0,40,0,149,65,1419,-5,-5,31,63 "11","2013/11/04 15:55",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1650,0,841,55,77,37,168,80,133,291,286,17,64,138,152,43,57,936,97,57,16,112,8,52,72,103,-5,134,407,-5,-5,0,-5,-5,0,0,-5,-5,129,0,5,-5,-5,2,274,3401,-5,3,297,522,2,8,805,96,5,23,23,0,16,14,0,15,12,-5,0,37,0,186,74,1623,-5,-5,14,45 "12","2013/11/04 15:56",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1471,0,826,42,81,38,162,92,477,284,191,32,68,130,144,45,66,244,100,63,16,146,14,139,102,96,-5,104,302,-5,-5,0,-5,-5,0,0,-5,-5,127,0,10,-5,-5,8,298,3363,-5,2,440,582,3,18,1010,79,8,68,19,0,14,15,0,15,11,-5,0,45,0,129,68,1539,-5,-5,4,93 "13","2013/11/04 15:57",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1035,0,1002,39,308,36,226,101,104,269,185,24,91,122,137,46,140,59,87,49,18,273,7,156,75,87,-5,113,145,-5,-5,0,-5,-5,0,0,-5,-5,202,0,3,-5,-5,6,214,3794,-5,9,192,500,4,18,1095,161,90,142,84,0,15,15,0,25,17,-5,0,59,0,207,59,1563,-5,-5,29,164 "14","2013/11/04 
15:58",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,707,0,968,33,230,37,179,139,138,303,255,21,92,104,161,234,67,55,100,43,18,168,6,145,87,93,-5,126,294,-5,-5,0,-5,-5,0,0,-5,-5,140,0,2,-5,-5,13,305,3448,-5,1,262,648,4,30,928,58,281,51,163,0,19,18,0,40,17,-5,0,155,0,90,50,1631,-5,-5,15,60 "15","2013/11/04 15:59",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,761,0,954,58,103,41,176,79,107,310,109,20,86,142,146,846,51,68,91,50,18,184,6,45,71,96,-5,109,142,-5,-5,0,-5,-5,0,0,-5,-5,254,0,3,-5,-5,6,276,3513,-5,3,171,545,4,4,958,51,91,34,60,0,27,16,0,22,12,-5,0,62,0,91,52,1651,-5,-5,12,51 "No.","time","00:01:3CX(X2124-1)","00:00:C0X(X2125-1)","00:00:C6X(X2126-1)","00:01:04X(X2127-1)","00:01:08X(X2128-1)","00:00:DBX(X2129-1)","00:01:B9X(X2130-1)","00:00:DDX(X2131-1)","00:00:DCX(X2132-1)","00:01:64X(X2133-1)","00:00:E0X(X2134-1)","00:00:E1X(X2135-1)","00:00:E2X(X2136-1)","00:00:E6X(X2137-1)","00:00:E8X(X2138-1)","00:00:E5X(X2139-1)","00:00:E4X(X2140-1)","00:00:E3X(X2141-1)","00:00:E7X(X2142-1)","00:00:E9X(X2143-1)","00:00:CEX(X2144-1)","00:00:D8X(X2145-1)","00:00:AAX(X2146-1)","00:00:EDX(X2147-1)","00:60:3FX(X2148-1)","00:00:F7X(X2149-1)","00:00:31X(X2150-1)","00:00:D6X(X2151-1)","00:00:D7X(X2152-1)","00:00:EEX(X2153-1)","00:00:EFX(X2154-1)","00:60:46X(X2155-1)","00:00:F0X(X2156-1)","00:00:F1X(X2157-1)","00:00:ECX(X2158-1)","00:00:F3X(X2159-1)","00:00:EBX(X2160-1)","00:00:F4X(X2161-1)","00:00:32X(X2162-1)","00:01:86X(X2163-1)","00:00:2BX(X2164-1)","00:02:10X(X2165-1)","00:02:11X(X2166-1)","00:00:2CX(X2167-1)","00:01:0AX(X2168-1)","00:01:0BX(X2169-1)","00:00:A9X(X2170-1)","00:60:02X(X2171-1)","00:60:01X(X2172-1)","00:60:03X(X2173-1)","00:60:04X(X2174-1)","00:60:05X(X2175-1)","00:60:06X(X2176-1)","00:60:07X(X2177-1)","00:60:08X(X2178-1)","00:60:09X(X2179-1)","00:60:0AX(X2180-1)","00:60:00X(X2181-1)","00:60:3EX(X2182-1)","00:01:06X(X2183-1)","00:01:0DX(X2184-1)","00:01:07X(X2185-1)","00:01:05X(X2186-1)","00:02:7BX(X2187-1)","00:02:7CX(X2188-1)","00:02:B5X(X2189-1)","00:02:E5X(X2190-1)","00:02:0FX(X2191-1)","00:01:0EX(X2192-1)","00:01:11X(X2193-1)","00:01:14X(X2194-1)","00:01:10X(X2195-1)","00:01:12X(X2196-1)","00:01:13X(X2197-1)","00:01:09X(X2198-1)","00:00:FBX(X2199-1)","00:00:33X(X2200-1)","00:01:0FX(X2201-1)","00:01:27X(X2202-1)","00:01:15X(X2203-1)","00:01:1DX(X2204-1)","00:01:1BX(X2205-1)","00:01:1AX(X2206-1)","00:01:1CX(X2207-1)","00:02:4CX(X2208-1)","00:01:39X(X2209-1)","00:01:16X(X2210-1)","00:01:38X(X2211-1)","00:02:E7X(X2212-1)","00:01:18X(X2213-1)","00:00:FEX(X2214-1)","00:01:19X(X2215-1)","00:00:FDX(X2216-1)","00:00:FFX(X2217-1)","00:01:29X(X2218-1)","00:01:28X(X2219-1)","00:01:17X(X2220-1)","00:01:2DX(X2221-1)","00:01:2EX(X2222-1)","00:01:2FX(X2223-1)","00:01:2BX(X2224-1)","00:01:2CX(X2225-1)","00:60:0BX(X2226-1)","00:02:07X(X2227-1)","00:60:0FX(X2228-1)","00:60:0CX(X2229-1)","00:60:0DX(X2230-1)","00:01:00X(X2231-1)","00:01:4CX(X2232-1)","00:01:56X(X2233-1)","00:01:61X(X2234-1)","00:01:4EX(X2235-1)","00:01:55X(X2236-1)","00:01:58X(X2237-1)","00:01:59X(X2238-1)","00:01:52X(X2239-1)","00:01:5DX(X2240-1)","00:01:60X(X2241-1)","00:01:4DX(X2242-1)","00:01:5AX(X2243-1)","00:01:54X(X2244-1)","00:01:46X(X2245-1)","00:01:5EX(X2246-1)","00:01:5CX(X2247-1)","00:01:49X(X2248-1)","00:01:4AX(X2249-1)","00:01:50X(X2250-1)","00:01:4BX(X2251-1)" "1","2013/11/04 
15:45",-5,9,62,-5,-5,0,-5,0,0,-5,7,0,0,40,21,55,21,79,24,203,3,0,88,51,-5,0,2,272,15,1967,51,-5,61,58,31,243,24,0,3,-5,0,-5,-5,13,-5,-5,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,0,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,0,-5,1,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5 "2","2013/11/04 15:46",-5,10,47,-5,-5,0,-5,0,0,-5,7,0,0,45,24,68,25,94,24,185,3,0,93,40,-5,0,3,285,116,2195,75,-5,117,70,41,216,27,0,3,-5,0,-5,-5,13,-5,-5,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,9,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,35,-5,24,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5 "3","2013/11/04 15:47",-5,19,111,-5,-5,0,-5,0,0,-5,2,0,0,44,30,62,24,91,32,190,1,0,93,121,-5,0,3,346,283,1534,10,-5,93,29,32,218,14,0,3,-5,0,-5,-5,12,-5,-5,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,34,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,125,-5,74,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5 ... ... etc etc ... Its a CSV file which seems to be split when the number of columns > 130: next columns are added to the file with new lines. I don't know the number of columns which is dynamic, but I always have 1 Header Line + 15 Results followed by 1 Header Line + 15 Results and so on a certain number of times. What I'm looking is reserve this split thing and have one correct CSV file that I can add into Splunk later. That means EACH LINE HAVE ONE DISTINCT TIME (And No.) so my new file much have only 16 lines. (1 Header Line + 15 Results, 1 by minute) So I need to append : all the (16+1)*n lines (n belongs to 1,EndOfFile) to the 1rst line without the first 2 columns (No and Time are the same) all the (16+2)*n lines to the 2nd line without the first 2 columns (No and Time are the same) all the (16+3)*n lines to the 3nd line without the first 2 columns (No and Time are the same) etc etc... If someone can help me with this script that would be awesome ! EDIT: Here's where I am but no success: Import-Csv .\data.txt |Group-Object -Property No.,time |% { $text = $_.name+"," $text += ($_.group | % {$i=0;$j=$_.count}{$i++ ; ($_|%{$_.toString() + ","})*($j-$i -gt 0)}) $text += "`n" Write-Output $text } EDIT 2: My problem is that I got an hashtable but I don't know any Names to get all the elements. I tried with getEnumerator() without success : Method invocation failed because [System.Management.Automation.PSCustomObject] doesn't contain a method named 'getEnumerator'. Import-Csv .\data.txt |Group-Object -Property No.,time |% { $text = $_.name+"," $text += ($_.group | % {$i=0;$j=$_.count}{$i++ ; ($_.GetEnumerator()|%{$_ + ","})*($j-$i -gt 0)}) $text += "`n" Write-Output $text } If I put a column name like "1-1" instead of getEnumerator() its working but I can't do that for all columns since I don't know the names.
I used Get-Content and plain array handling; it's a lot easier than trying to get those ** cmdlets working...

$ofs = ','  # ! Internal variable used for the [string] cast of an [array]; defines the separator
# Read the input file
$txt = gc .\data.txt
if ($txt -is [system.array]) {
    # Variable declarations
    $res = @()
    $count = 0
    # Processing
    foreach ($line in $txt) {                            # Process every line of the file
        $count++                                         # Increment the line counter
        if ($count -le 16) {                             # The first 16 lines [1 header + 15 data rows]
            $res += $line + ","                          #   are copied as-is
        } else {                                         # For every following line [17..end]
            $newline = $line.Split(',')[2..500]          #   drop its first two columns
            $res[($count % 16) - 1] += [string]$newline  #   and append it to one of the first 16 lines, using Mod[16]
        }
    }
    # Write the output
    $res
}
Plone/Dexterity schema.Choice not allowing Spanish characters
In Plone 4.1.2 I created a myContentType with Dexterity. It has 3 zope.schema.Choice fields. Two of them take their values from a hardcoded vocabulary and the other one from a dynamic vocabulary. In both cases, if I choose a value that has Spanish accents, when I save the add form the selection is gone and doesn't show up in the view form (without showing any error message). But if I choose a non accented value everything works fine. Any advise on how to solve this problem? (David; I hope this is what you asked me for) # -*- coding: utf-8 -*- from five import grok from zope import schema from plone.directives import form, dexterity from zope.component import getMultiAdapter from plone.namedfile.interfaces import IImageScaleTraversable from plone.namedfile.field import NamedBlobFile, NamedBlobImage from plone.formwidget.contenttree import ObjPathSourceBinder from zope.schema.vocabulary import SimpleVocabulary, SimpleTerm from zope.schema.interfaces import IVocabularyFactory from z3c.formwidget.query.interfaces import IQuerySource from zope.component import queryUtility from plone.formwidget.masterselect import ( _, MasterSelectField, MasterSelectBoolField, ) from plone.app.textfield.interfaces import ITransformer from plone.indexer import indexer from oaxaca.newcontent import ContentMessageFactory as _ from oaxaca.newcontent.config import OAXACA from types import UnicodeType _default_encoding = 'utf-8' def _encode(s, encoding=_default_encoding): try: return s.encode(encoding) except (TypeError, UnicodeDecodeError, ValueError): return s def _decode(s, encoding=_default_encoding): try: return unicode(s, encoding) except (TypeError, UnicodeDecodeError, ValueError): return s view = view.encode('utf-8') def getSlaveVocab(master): results = [] if master in OAXACA: results = sorted(OAXACA[master]) return SimpleVocabulary.fromValues(results) class IFicha(form.Schema, IImageScaleTraversable): """Describes a ficha """ tipoMenu = schema.Choice( title=_(u"Tipo de evento"), description=_(u"Marque la opción que aplique o " "seleccione otro si ninguna aplica"), values=( u'Manifestación en lugar público', u'Toma de instalaciones municipales', u'Toma de instalaciones estatales', u'Toma de instalaciones federales', u'Bloqueo de carretera municipal', u'Bloqueo de carretera estatal', u'Bloqueo de carretera federal', u'Secuestro de funcionario', u'Otro',), required=False, ) tipoAdicional = schema.TextLine( title=_(u"Registre un nuevo tipo de evento"), description=_(u"Use este campo solo si marcó otro en el menú de arriba"), required=False ) fecha = schema.Date( title=_(u"Fecha"), description=_(u"Seleccione el día en que ocurrió el evento"), required=False ) municipio = MasterSelectField( title=_(u"Municipio"), description=_(u"Seleccione el municipio donde ocurrió el evento"), required=False, vocabulary="oaxaca.newcontent.municipios", slave_fields=( {'name': 'localidad', 'action': 'vocabulary', 'vocab_method': getSlaveVocab, 'control_param': 'master', }, ) ) localidad = schema.Choice( title=_(u"Localidad"), description=_(u"Seleccione la localidad donde ocurrió el evento."), values=[u'',], required=False, ) actores = schema.Text( title=_(u"Actores"), description=_(u"Liste las agrupaciones y los individuos que participaron en el evento"), required=False, ) demandas = schema.Text( title=_(u"Demandas"), description=_(u"Liste las demandas o exigencias de los participantes"), required=False ) depResponsable = schema.Text( title=_(u"Dependencias"), description=_(u"Liste las dependencias gubernamentales responsables de 
atender las demandas"), required=False ) seguimiento = schema.Text( title=_(u"Acciones de seguimiento"), description=_(u"Anote cualquier accion de seguimiento que se haya realizado"), required=False ) modulo = schema.Choice( title=_(u"Informa"), description=_(u"Seleccione el módulo que llena esta ficha"), values=( u'M1', u'M2', u'M3', u'M4', u'M5', u'M6', u'M7', u'M8', u'M9', u'M10', u'M11', u'M12', u'M13', u'M14', u'M15', u'M16', u'M17', u'M18', u'M19', u'M20', u'M21', u'M22', u'M23', u'M24', u'M25', u'M26', u'M27', u'M28', u'M29', u'M30',), required=False ) imagen1 = NamedBlobImage( title=_(u"Imagen 1"), description=_(u"Subir imagen 1"), required=False ) imagen2 = NamedBlobImage( title=_(u"Imagen 2"), description=_(u"Subir imagen 2"), required=False ) anexo1 = NamedBlobFile( title=_(u"Anexo 1"), description=_(u"Subir archivo 1"), required=False ) anexo2 = NamedBlobFile( title=_(u"Anexo 2"), description=_(u"Subir archivo 2"), required=False ) #indexer(IFicha) def textIndexer(obj): """SearchableText contains fechaFicha, actores, demandas, municipio and localidad as plain text. """ transformer = ITransformer(obj) text = transformer(obj.text, 'text/plain') return '%s %s %s %s %s' % (obj.fecha, obj.actores, obj.demandas, obj.municipio, obj.localidad) grok.global_adapter(textIndexer, name='SearchableText') class View(grok.View): """Default view (called "##view"") for a ficha. The associated template is found in ficha_templates/view.pt. """ grok.context(IFicha) grok.require('zope2.View') grok.name('view')
I found the same problem some months ago on early development of collective.nitf. The tokens on a vocabulary must be normalized; this is how I solved it:

# -*- coding: utf-8 -*-
import unicodedata
…

class SectionsVocabulary(object):
    """Creates a vocabulary with the sections stored in the registry;
    the vocabulary is normalized to allow the use of non-ascii characters.
    """
    grok.implements(IVocabularyFactory)

    def __call__(self, context):
        registry = getUtility(IRegistry)
        settings = registry.forInterface(INITFSettings)
        items = []
        for section in settings.sections:
            token = unicodedata.normalize('NFKD', section).encode('ascii', 'ignore').lower()
            items.append(SimpleVocabulary.createTerm(section, token, section))
        return SimpleVocabulary(items)

grok.global_utility(SectionsVocabulary, name=u'collective.nitf.Sections')
Plone uses gettext for internationalization. The bulletproof approach would be to implement your custom functionality in English and use locales for your specific language. Look at the relevant parts of the community manual on how this is done. Since you already set up a MessageFactory, you could even use a tool such as zettwerk.i18nduder for quick extraction of message strings.
I found a partial explanation/solution here. I can get the Spanish characters in the view form if I do:

# -*- coding: utf-8 -*-
from plone.directives import form
from five import grok
from zope import schema
from plone.directives import form, dexterity
from zope.schema.vocabulary import SimpleVocabulary

myVocabulary = SimpleVocabulary.fromItems((
    (u"Foo", "id_foó"),
    (u"Baroo", "id_baroó")))

class IPrueba(form.Schema):
    tipoMenu = schema.Choice(
        title=_(u"Tipo de evento"),
        description=_(u"Marque la opción que aplique o "
                      "seleccione otro si ninguna aplica"),
        vocabulary=myVocabulary,
        required=False,
    )