I have an exported product list from Shopify. I need to translate the product descriptions, but the exported descriptions contain HTML tags that have to stay intact. How do I exclude the markup from my translation?
I use =GOOGLETRANSLATE.
What can I add to my function to exclude all the text between < and > characters?
You can strip tags with this function:
function stripTags(body) {
  var regex = /(<([^>]+)>)/ig; // matches anything between < and >
  return body.replace(regex, "");
}
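To use it from the sheet, save the function via Tools > Script editor and call it like a built-in function; a sketch, assuming the HTML sits in A1:
=stripTags(A1)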
use:
=REGEXREPLACE(A1; "</?\S+[^<>]*>"; "")
or:
=REGEXREPLACE(A1; "<\/\w+>|<\w+.*?>"; "")
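Since the original goal was translation, either formula can be nested directly inside GOOGLETRANSLATE; a sketch, where the source language "nl" and target language "en" are assumptions:
=GOOGLETRANSLATE(REGEXREPLACE(A1; "</?\S+[^<>]*>"; ""); "nl"; "en")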
I want to extract data from the text I get from scraping. Below is the data I have available, followed by the scraped text and the wanted result.
I have the following information available:
Categories
Kleur
Soorthek
Bevestigingswijze
I am getting this text from scraping: BevestigingswijzeKlembevestigingKleurWitSoorthekSpijlenhek
I want to get the result below by using a function in Google Sheets.
Wanted Result
KleurWit
SoorthekSpijlenhek
BevestigingswijzeKlembevestiging
Thank you in advance!
You could accomplish this with an Apps Script custom function. To achieve this, follow these steps:
In your spreadsheet, select Tools > Script editor to open a script bound to your file.
Copy this function into the script editor, and save the project:
function splitScrape(categories, scrape) {
  // Find where each category name starts in the scraped string
  // (sort numerically; the default sort would compare values as strings).
  const indexes = categories.map(c => scrape.indexOf(c[0])).sort((a, b) => a - b);
  // Slice from each category's position up to the start of the next one.
  const split = indexes.map((index, j) => scrape.slice(index, indexes[j + 1]));
  return split;
}
The sample above won't detect multiple occurrences of the same category in the scrape string, and it will only handle the first one (if a second parameter is not provided, indexOf only detects the first occurrence). In order to detect multiple occurrences, I'd suggest replacing the function with this one:
function splitScrape(categories, scrape) {
  const indexes = [];
  // Record every occurrence of every category name in the scrape.
  categories.flat().filter(String).forEach(c => {
    let index = scrape.indexOf(c);
    while (index > -1) {
      indexes.push(index);
      index = scrape.indexOf(c, index + 1);
    }
  });
  // Sort numerically so the slices follow the order of appearance.
  indexes.sort((a, b) => a - b);
  const split = indexes.map((index, j) => scrape.slice(index, indexes[j + 1])).filter(String);
  return split;
}
Now, if you go back to your spreadsheet, you can use this function like any built-in one. You just have to provide the appropriate ranges where the Categories and the scrape string are located.
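For example, assuming the category names are in A2:A5 and the scraped string is in B2 (hypothetical ranges; adjust them to your sheet), the call would be:
=splitScrape(A2:A5; B2)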
Reference:
Custom Functions in Google Sheets
I was wondering if it's possible to download, say, only sheet 1 of a Google spreadsheet as Excel. I have seen a few SO posts that show how to export the WHOLE spreadsheet as Excel, but I need to export just one sheet. Is it at all possible? And if yes, how?
You can download a specific sheet using its GID.
Each sheet has a GID; you can find the GID of a specific sheet in the URL of the spreadsheet. Then you can use this link to download the specific sheet:
https://docs.google.com/spreadsheets/d/<KEY>/export?format=xlsx&gid=<GID>
e.g.:
https://docs.google.com/spreadsheets/d/1D5vzPaOJOx402RAEF41235qQTOs28_M51ee5glzPzj0/export?format=xlsx&gid=1990092150
KEY is the unique ID of the spreadsheet.
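If you build these links often, the pattern is easy to generate in Apps Script; a minimal sketch (exportUrl is a hypothetical helper, not part of any API):
// Build the export URL for a single sheet from the spreadsheet KEY and the sheet's GID.
function exportUrl(key, gid) {
  return 'https://docs.google.com/spreadsheets/d/' + key + '/export?format=xlsx&gid=' + gid;
}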
source: https://www.quora.com/How-do-I-download-just-one-sheet-from-google-spreadsheet/answer/Ranjith-Kumar-339?srid=2YCg
From what I've found, the other two answers on this post are exactly correct; all you need to do is replace this:
/edit#gid=
with:
/export?format=xlsx&gid=
This works just fine, although I found that I had to keep looking up this string and copying it. Instead, I made a quick JavaScript snippet that does all the work for you:
Just run the code snippet below and drag the link it creates into your bookmarks bar. I know this is a little hacky, but for some reason Stack Overflow doesn't want me injecting JavaScript into the links I provide.
Export Sheet as Excel
I've tested this on the latest versions of Chrome, Safari, and Firefox. They all work although you might have to get a little creative about how you make your bookmarks.
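In case the snippet doesn't load for you, a minimal bookmarklet along the lines described above (my own sketch, not the original snippet) would be:
javascript:void(location.href=location.href.replace('/edit#gid=','/export?format=xlsx&gid='))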
Every Google spreadsheet URL looks like this:
https://docs.google.com/spreadsheets/d/1D5vzPaOJOx402RAEF41235qQTOs28_M51ee5glzPzj0/edit#gid=1078561300
In every spreadsheet URL we can see /edit#gid=;
this is generally the default edit mode.
/edit#gid=
Just replace it with:
/export?format=xlsx&gid=
It will download the single sheet from the workbook.
I am able to download all sheets of a spreadsheet.
Just remove anything after
/edit?
and replace it with
/export?format=xlsx
for Excel, or
/export?format=pdf
for PDF.
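For example, using the spreadsheet ID shown earlier, the PDF export of the whole workbook would be:
https://docs.google.com/spreadsheets/d/1D5vzPaOJOx402RAEF41235qQTOs28_M51ee5glzPzj0/export?format=pdf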
Please use the any_value() function on the column, because the field (column) has more than one value for one ID in the GROUP BY.
Like:
select any_value(phone_no) from user_details group by user_id;
Here one user_id has more than one phone number, so the query cannot tell which one to choose.
You can do this by clicking the down arrow near the sheet name to bring up the options, then selecting Copy to > New spreadsheet, and clicking "Open spreadsheet" in the pop-up that appears.
You can use my code:
function emailAsExcel() {
  var config = {
    to: "name@gmail.com",
    subject: "your text",
    body: "your text"
  };
  if (!config || !config.to || !config.subject || !config.body) {
    throw new Error('Configure "to", "subject" and "body" in an object as the first parameter');
  }
  var spreadsheet = SpreadsheetApp.getActiveSpreadsheet();
  var spreadsheetId = spreadsheet.getId();
  // Replace SHEET_GID with the GID of the sheet you want to email.
  var url = 'https://docs.google.com/spreadsheets/d/' + spreadsheetId +
      '/export?format=xlsx&gid=SHEET_GID';
  var token = ScriptApp.getOAuthToken();
  var response = UrlFetchApp.fetch(url, {
    headers: {
      'Authorization': 'Bearer ' + token
    }
  });
  var fileName = (config.fileName || spreadsheet.getName()) + '.xlsx';
  var blobs = [response.getBlob().setName(fileName)];
  if (config.zip) {
    blobs = [Utilities.zip(blobs).setName(fileName + '.zip')];
  }
  GmailApp.sendEmail(config.to, config.subject, config.body, {
    attachments: blobs
  });
}
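Note that the first time you run emailAsExcel, Apps Script will ask you to authorize the script: sending mail with GmailApp and fetching the export URL with UrlFetchApp both require OAuth scopes, which is standard for scripts like this.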
I have a link in this text:
s = "http://xyz.com/Getid.ashx?JobID=250920&JobTitle=office+junior&seswitch=1&lid=801&AVSDM=2012-11-22+11%3a33%3a00"
I need to extract two pieces of information from this link:
1) JobID, which is "250920"
2) JobTitle, which is "office junior"
Is this possible using gsub? Will I be able to get only that text from the link?
You can try .match with a regex:
result = s.match(/JobID=(\d+).+JobTitle=([a-zA-Z+0-9]+)&/)
result[1] # JobID
result[2] # JobTitle
Note that result[2] will be "office+junior"; the + is a URL-encoded space, so you still need to decode it to get "office junior".
I'm using the jQuery Mobile search filter list:
http://jquerymobile.com/test/docs/lists/lists-performance.html
I'm having some performance issues; my list is a little slow to filter on some phones. To try and aid performance, I want to change the search so that only items starting with the search text are returned.
So 'aris' currently finds the result 'paris', but I want this changed. I can see it's possible from the documentation below, but I don't know how to implement the code.
http://jquerymobile.com/test/docs/lists/docs-lists.html
$("document").ready( function (){
$(".ui-listview").listview('option', 'filterCallback', yourFilterFunction)
});
This seems to demonstrate how you write and call your own function, but I've no idea how to write it! Thanks.
http://blog.safaribooksonline.com/2012/02/14/jquery-mobile-tip-write-your-own-list-view-filter-function/
UPDATE: I've tried the following in a separate JS file:
$("document").ready( function (){
function beginsWith( text, pattern) {
text= text.toLowerCase();
pattern = pattern.toLowerCase();
return pattern == text.substr( 0, pattern.length );
}
$(".ui-listview").listview('option', 'filterCallback', beginsWith)
});
It might look something like this:
function beginsWith(text, pattern) {
  text = text.toLowerCase();
  pattern = pattern.toLowerCase();
  return pattern == text.substr(0, pattern.length);
}
Basically you compare the first "length of pattern" characters of the source text against the pattern. So if the text is "tester" and the pattern is "test", it sees the pattern has length 4, takes the substr of "tester" from 0 to 4, which gives "test", and "test" is equal to "test"... so it returns true. Lowercasing both makes it case insensitive.
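A quick check of that logic (the sample values are illustrative):
beginsWith("tester", "test"); // true: "tester".substr(0, 4) === "test"
beginsWith("paris", "aris");  // false: the match must start at position 0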
Another trick to improve filter performance: only filter once the user has entered more than one character.
Edit: it appears jQuery Mobile's filter function expects "true" to mean the item was not found... so it needs to be backwards: return pattern != text.substr(0, pattern.length);
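Putting the pieces together, the registered callback might look like this (a sketch, with the inverted return per the edit above):
$(".ui-listview").listview('option', 'filterCallback', function (text, pattern) {
  text = text.toLowerCase();
  pattern = pattern.toLowerCase();
  return pattern != text.substr(0, pattern.length); // true means "hide this item"
});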
This worked for me. I am using a regular expression here, so it's a slightly different way to achieve the same thing.
The reason my code didn't work initially was that the list items had a lot of whitespace at the beginning and at the end (I found while debugging that it got added on its own).
So I do a trim on the text before doing the match. I have a feeling Jonathan Rowny's implementation will also work if we do text.trim() before matching.
$(".ui-listview").listview('option', 'filterCallback', function (text, searchValue) {
var matcher = new RegExp("^" + searchValue, "i");
return !matcher.test(text.trim());
});
I want to parse a random website, modify the content so that every word is a link (for a dictionary tooltip) and then display the website in an iframe.
I'm not looking for a complete solution, but for a hint or a possible strategy. The linking is my problem; parsing the website and displaying it in an iframe is quite simple. So basically I have a String with all the HTML content. I'm not even sure if it's better to do it server-side or after the page is loaded with JS.
I'm working with Ruby on Rails, jQuery, jRails.
Note: the content of the href attribute depends on the word.
Clarification:
I tried a regexp and it already kind of works:
@site.gsub!(/[A-Za-z]+(?:['-][A-Za-z]+)?|\d+(?:[,.]\d+)?/) { |word| '<a href="...">' + word + '</a>' }
But the problem is to only replace words in the text and leave the HTML as it is. So I guess it is a regex problem...
Thanks for any ideas.
I don't think a regexp is going to work for this - or, at least, it will always be brittle. A better way is to parse the page using Hpricot or Nokogiri, then go through it and modify the nodes that are plain text.
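If you end up doing it client-side instead (the question mentions JS after page load as an option), the same idea of touching only the plain-text nodes, so the markup stays intact, might look like this sketch (linkifyWords is a hypothetical helper, and the href is a placeholder):
function linkifyWords(root) {
  // Walk only the text nodes; element tags are never touched.
  var walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT, null, false);
  var textNodes = [];
  var node;
  while ((node = walker.nextNode())) textNodes.push(node);
  textNodes.forEach(function (textNode) {
    var span = document.createElement('span');
    // The href is a placeholder; the real target depends on the word.
    span.innerHTML = textNode.nodeValue.replace(
        /[A-Za-z]+(?:['-][A-Za-z]+)?/g, '<a href="...">$&</a>');
    textNode.parentNode.replaceChild(span, textNode);
  });
}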
It sounds like you have it mostly planned out already.
Split the content into words and then, for each word, create a link around it, such as <a href="...">whatever</a>.
EDIT (based on your comment):
Ahh ... I recommend you search around for screen scraping techniques. Most of them should start with removing anything between < and > characters, and replacing <br> and <p> with newlines.
I would use Nokogiri to remove the HTML structure before you use the regex.
require 'nokogiri'
no_html = Nokogiri::HTML(html_as_string).text
Simple. Hash the HTML, run your regex, then unhash the HTML.
<?php
class ht
{
    static $hashes = array();

    # hashes everything that matches $pattern and saves matches for later unhashing
    static function hash($text, $pattern) {
        return preg_replace_callback($pattern, array('self', 'push'), $text);
    }

    # hashes all html tags and saves them
    static function hash_html($html) {
        return self::hash($html, '`<[^>]+>`');
    }

    # hashes and saves $value, returns key
    static function push($value) {
        if (is_array($value)) $value = $value[0];
        static $i = 0;
        $key = "\x05" . ++$i . "\x06";
        self::$hashes[$key] = $value;
        return $key;
    }

    # unhashes all saved values found in $text
    static function unhash($text) {
        return str_replace(array_keys(self::$hashes), self::$hashes, $text);
    }

    static function get($key) {
        return self::$hashes[$key];
    }

    static function clear() {
        self::$hashes = array();
    }
}
?>
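A note on the design: the generated keys are wrapped in the control characters \x05 and \x06, which are vanishingly unlikely to occur in real HTML, so the str_replace in unhash() cannot collide with genuine content.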
Example usage:
$hashed = ht::hash_html($your_html);
// your word->href converter here, operating on $hashed
$your_formatted_html = ht::unhash($hashed);
Oh... right, I wrote this in PHP. I guess you'll have to convert it to Ruby or JS, but the idea is the same.