Who can help develop a script that will lock a Google spreadsheet row after a user has entered data into that row?
Case Description:
I have a spreadsheet table used by many users to enter data. I need to be sure that different users cannot change the data entered by the others. The best way would be for each row to have a special "lock" button, so when a user has entered all the info into a table row he can push the "lock" button to prevent data changes by other users. Besides, I wish the user could still change the data he entered, but only within a time limit - for example, for 30 minutes after he locked the row.
As an admin I wish to be able to change any data in a spreadsheet table.
Thank you for help.
Perhaps you can use Google Sheets' Protected Range feature as the lock. When person A wants to write data, he sets the sheet as protected; after writing, he makes it public again. If person B tries to write while person A's protection is in place, he will get an exception, which he can catch so he can wait a moment and retry the write.
import logging
from datetime import datetime

logger = logging.getLogger(__name__)

class ContextManagerUpdateSheet(object):
    def __init__(self, spread_sheet_id, sheet_id):
        self.spread_sheet_id = spread_sheet_id
        self.sheet_id = sheet_id
        # self.end_row_index = end_row_index
        # self.end_column_index = ord(end_column_index) - 65

    def __enter__(self):
        logger.info("set spreadsheet sheet: {} protected.".format(self.sheet_id))
        self.protected_id = set_protected(self.spread_sheet_id, self.sheet_id)

    def __exit__(self, *others):
        logger.info("release protected spreadsheet sheet: {}.".format(self.sheet_id))
        delete_protected(self.spread_sheet_id, self.protected_id)

def runner():
    with ContextManagerUpdateSheet("{google_spread_url}", 0):
        print(datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
        data = [["www", "3333"]]
        append_data("{google_spread_url}", 0, data)  # user-defined helper that appends rows
---
# @retry(googleapiclient.errors.HttpError, 5, 20, logger=logger)
def set_protected(spreadsheet_id, sheet_id):
    service = get_service_handler()  # user-defined helper returning the Sheets API service
    requests_list = list()
    requests_list.append(__protected_info(sheet_id))  # user-defined helper building the addProtectedRange request
    body = {
        "requests": requests_list
    }
    resp = service.spreadsheets().batchUpdate(spreadsheetId=spreadsheet_id,
                                              body=body).execute()
    return resp["replies"][0]["addProtectedRange"]["protectedRange"]["protectedRangeId"]
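If you would rather stay inside the spreadsheet itself, the same protect-write-release cycle can be sketched in Apps Script with the documented Sheet.protect() API. This is only an illustration; the "Data" tab name and the callback are placeholders, not part of the question's setup:
function withSheetLock(callback) {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Data"); // assumed tab name
  // Protect the sheet so concurrent writers fail, run the write, then release.
  var protection = sheet.protect().setDescription("write lock");
  try {
    callback(sheet);
  } finally {
    protection.remove();
  }
}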
Related
So I'm using Google Sheets and I have an ActiveCampaign integration that adds a new row when a new user subscribes. I would like to add, along with the user info, the current day, so I can know when people got on my list.
I have 4 tabs. The one where I want the day is called "leadData".
I tried this code, but it's not working:
function onChange(e) {
  var sheet = e.source.getSheetByName("leadData"),
      columnToWatch = 1,
      columnToStamp = 7, // change all of these to your needs... 1 is column A, 2 is column B, etc.
      excluded = ["General Info", "Campaigns", "Automations"]; // add names of sheets/tabs to this list; the script will not work on these sheets
  if (e.range.columnStart !== columnToWatch || !e.value || excluded.indexOf(sheet.getName()) > -1) return;
  sheet.getRange(e.range.rowStart, columnToStamp)
      .setValue(new Date()).setNumberFormat("MM/dd HH:mm");
}
How can I solve it?
Answer:
The source and range fields are not part of the event object for onChange triggers. You must specify the Sheet and rows you want directly.
More Information:
As per the documentation on event objects, the Google Sheets onChange() trigger is an installable trigger, and the event object it is passed has neither the source nor the range field.
Code Modifications:
You need to specify the sheet directly. Change the following line
var sheet = e.source.getSheetByName("leadData");
to:
var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("leadData");
and change
sheet.getRange(e.range.rowStart, columnToStamp)
.setValue(new Date()).setNumberFormat("MM/dd HH:mm");
to
sheet.getRange(sheet.getDataRange().getNumRows(), columnToStamp)
.setValue(new Date()).setNumberFormat("MM/dd HH:mm");
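Put together, the modified handler might look like this (a sketch; note that the columnToWatch and e.value checks from the original cannot survive the change, since onChange events carry neither a range nor a value):
function onChange(e) {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("leadData");
  var columnToStamp = 7; // column G
  // Stamp the last row with data, since the event cannot tell us which row changed.
  sheet.getRange(sheet.getDataRange().getNumRows(), columnToStamp)
      .setValue(new Date()).setNumberFormat("MM/dd HH:mm");
}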
References:
Event Objects | Apps Script | Google Developers
Installable Triggers | Apps Script | Google Developers
How can I fetch data every two minutes from a URL? I have tried different methods to achieve this, but couldn't succeed.
=if(Minute(Now())=Minute(Now()),
ImportHtml("https://www.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp?symbolCode=-10006&symbol=NIFTY&symbol=NIFTY&instrument=-&date=-&segmentLink=7&symbolCount=2&segmentLink=17",
"table",1),"")
I tried the above formula, but the data still doesn't update.
Need help on this.
This could do the trick... put =NOW() in some cell and set up an update rate in the settings:
function getData() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("<sheet-name>");
  var queryString = Math.random(); // cache-buster: a changing query string forces IMPORTHTML to re-fetch
  var cellFunction = '=IMPORTHTML("<url>?' + queryString + '","table",<index>)';
  sheet.getRange('<import-cell>').setValue(cellFunction);
}
Replace <sheet-name> with the name of your sheet (the tab name, not the name of the file).
Replace <url> with the URL of your web page.
Replace <index> with the table position on the page, e.g. 1.
Replace <import-cell> with the cell where you want to put your import statement, e.g. A1.
Now make a trigger for getData() to run every minute.
It will rewrite the IMPORTHTML formula into the cell each time, forcing a fresh import.
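If you prefer to create that trigger from code instead of the triggers UI, here is a sketch using the documented ScriptApp trigger builder (note the builder only accepts 1, 5, 10, 15 or 30 minutes, so an exact two-minute interval is not available):
function createTrigger() {
  // Run getData() on a time-driven installable trigger once per minute.
  ScriptApp.newTrigger("getData")
      .timeBased()
      .everyMinutes(1)
      .create();
}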
I have an existing google form and am looking to:
Image 1: the Google Form question.
1) Have the response to the question (What is your name) in the form automatically populate (Sheet 1, Column C) on this existing google sheet
Image 2: where the Google Form data will have to go.
2) The timestamp that gets generated with each google form submission to automatically populate (Sheet 1, Column E) in the YYYY-MM-DD format.
3) While these google form responses will be recorded in this spreadsheet there will be times when I will have to manually go in and enter information in subsequent rows as well.
Is this possible to do? I am new to bringing in data from google forms into google sheets, can anyone help with the questions above?
Okay. A couple of things.
Go to the Tool menu > Script editor.
Name the script (maybe 'Form Submission'?) by clicking the 'untitled project' text in the top left of the editor.
Replace all text in code.gs with the code below. (Change the code where indicated).
Then go to Edit > Current project's triggers.
Click the link that says: No triggers set up. Click here to add one now.
Under Run, select onSubmit.
Under Events, select on form submit.
Click save.
Now you should go back to the editor and push the play button. This will run the function and initiate the authorisation process. Click through the prompts and accept.
Now, every time a form is submitted, the name and timestamp will be copied over.
function onSubmit() {
  var spreadsheet = SpreadsheetApp.getActive();
  var responseSheet = spreadsheet.getSheetByName('Form Responses 1');
  var copyToSheet = spreadsheet.getSheetByName('Target');
  var rLastRow = responseSheet.getLastRow();
  var tLastRow = copyToSheet.getLastRow() + 1;
  var lastCol = responseSheet.getLastColumn();
  var values = responseSheet
      .getRange(rLastRow, 1, 1, lastCol)
      .getValues()[0];
  var timestamp = Utilities.formatDate(new Date(values[0]), Session.getScriptTimeZone(), 'yyyy-MM-dd');
  var name = values[1];
  copyToSheet.getRange('C' + tLastRow).setValue(name);
  copyToSheet.getRange('E' + tLastRow).setValue(timestamp).setNumberFormat('yyyy-MM-dd');
}
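As a variant: an 'on form submit' trigger also passes an event object whose values property already holds the submitted row, which avoids re-reading the response sheet. A sketch under the same 'Target' layout assumed above:
function onSubmitWithEvent(e) {
  var copyToSheet = SpreadsheetApp.getActive().getSheetByName('Target');
  var tLastRow = copyToSheet.getLastRow() + 1;
  // e.values is [timestamp, first answer, second answer, ...] in question order.
  var timestamp = Utilities.formatDate(new Date(e.values[0]),
      Session.getScriptTimeZone(), 'yyyy-MM-dd');
  copyToSheet.getRange('C' + tLastRow).setValue(e.values[1]);
  copyToSheet.getRange('E' + tLastRow).setValue(timestamp).setNumberFormat('yyyy-MM-dd');
}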
I am new to Google Sheets, and I have a Google Sheet that I have set up to dynamically place the present date in cell A1 and the time in cell A2. The sheet is "published to the web", and "Settings/Calculation" is set to Recalculate change every minute.
That all works fine, but I want to be able to read these values from the sheet using an API call. Also works perfectly, the FIRST TIME. Unfortunately, every time I try to call it again, I get the same answer as the first time, even a day later.
I'm using:
=int(hour(now()))&":"&int(minute(now()))&" "&int(SECOND(now()))
as the formula. I should also add that it's a JSON file that I'm reading and it is updating properly on the actual sheet.
I'm sure that I am missing something. Can someone please tell me what it is?
Thanks in advance.
Maybe you are not reading the JSON correctly. This gives me the correct result every time I run it.
function myFunction() {
  var url = "https://spreadsheets.google.com/feeds/cells/1TtXe1JXKsxHKUWb3bqniHkLQB0Po1fSUqsiib2yMv90/1/public/values?alt=json";
  try {
    var sh = SpreadsheetApp.getActive().getSheetByName("Sheet1");
    var response = UrlFetchApp.fetch(url);
    var str = response.getContentText();
    var data = JSON.parse(str); // parse the response text, not the HTTPResponse object
    var entry = data.feed.entry;
    sh.getRange(1, 1).setValue(entry[0].content.$t);
    sh.getRange(1, 2).setValue(entry[1].content.$t);
  } catch (e) {
    Logger.log(e);
  }
}
You need to check the size of "entry" before reading it (see the sketch below); I just wanted to show that it works.
Thanks
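Such a guard could be as small as the following lines, inserted right after var entry = data.feed.entry; in the function above (illustrative only):
// Bail out early if the feed did not return the two cells read below.
if (!entry || entry.length < 2) {
  Logger.log("Unexpected feed response: " + str);
  return;
}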
I want to use Amazon DynamoDB with Rails, but I have not found a way to implement pagination.
I will use AWS::Record::HashModel as the ORM.
This ORM supports limits like this:
People.limit(10).each {|person| ... }
But I could not figure out how to implement the following MySQL query in DynamoDB.
SELECT *
FROM `People`
LIMIT 1 , 30
You issue queries using LIMIT. If the subset returned does not contain the full table, a LastEvaluatedKey value is returned. You use this value as the ExclusiveStartKey in the next query. And so on...
From the DynamoDB Developer Guide.
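A minimal sketch of that loop in Node.js, assuming AWS SDK v2's DocumentClient and a People table (the names are illustrative, not taken from the question's Rails setup):
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

// Fetch one page of up to 30 items; the returned LastEvaluatedKey (if any)
// becomes the ExclusiveStartKey of the next request.
async function scanPage(startKey) {
  const params = { TableName: 'People', Limit: 30 };
  if (startKey) params.ExclusiveStartKey = startKey;
  const page = await docClient.scan(params).promise();
  return { items: page.Items, nextKey: page.LastEvaluatedKey };
}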
You can provide 'page-size' in your query to set the result set size.
The response from DynamoDB contains 'LastEvaluatedKey', which indicates the last key as per the page size. If the response doesn't contain 'LastEvaluatedKey', there are no results left to fetch.
Use the 'LastEvaluatedKey' as the 'ExclusiveStartKey' when fetching the next page.
I hope this helps.
DynamoDB Pagination
Here's a simple copy-paste-run proof of concept (Node.js) for stateless forward/reverse navigation with dynamodb. In summary; each response includes the navigation history, allowing user to explicitly and consistently request either the next or previous page (while next/prev params exist):
GET /accounts -> first page
GET /accounts?next=A3r0ijKJ8 -> next page
GET /accounts?prev=R4tY69kUI -> previous page
Considerations:
If your ids are large and/or users might do a lot of navigation, then the potential size of the next/prev params might become too large.
Yes you do have to store the entire reverse path - if you only store the previous page marker (per some other answers) you will only be able to go back one page.
It won't handle changing pageSize midway, consider baking pageSize into the next/prev value.
base64 encode the next/prev values, and you could also encrypt.
Scans are inefficient, while this suited my current requirement it won't suit all!
// demo.js
const mockTable = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]

const getPagedItems = (pageSize = 5, cursor = {}) => {
  // Parse cursor
  const keys = cursor.next || cursor.prev || [] // fwd first
  let key = keys[keys.length-1] || null // eg ddb's PK

  // Mock query (mimic dynamodb response)
  const Items = mockTable.slice(parseInt(key) || 0, pageSize+key)
  const LastEvaluatedKey = Items[Items.length-1] < mockTable.length
    ? Items[Items.length-1] : null

  // Build response
  const res = {items:Items}
  if (keys.length > 0) // add reverse nav keys (if any)
    res.prev = keys.slice(0, keys.length-1)
  if (LastEvaluatedKey) // add forward nav keys (if any)
    res.next = [...keys, LastEvaluatedKey]
  return res
}
// Run test ------------------------------------
const runTest = () => {
  const PAGE_SIZE = 6
  let x = {}, i = 0
  // Page to end
  while (i == 0 || x.next) {
    x = getPagedItems(PAGE_SIZE, {next:x.next})
    console.log(`Page ${++i}: `, x.items)
  }
  // Page back to start
  while (x.prev) {
    x = getPagedItems(PAGE_SIZE, {prev:x.prev})
    console.log(`Page ${--i}: `, x.items)
  }
}
runTest()
I faced a similar problem.
The generic pagination approach is to use a "start index" (or "start page") and a "page length". The "ExclusiveStartKey" and "LastEvaluatedKey" based approach is very DynamoDB-specific, and I feel this DynamoDB-specific implementation of pagination should be hidden from the API client/UI.
Also, if the application is serverless, using a service like Lambda, it will not be possible to maintain state on the server; the alternative, keeping the state on the client, makes the client implementation very complex.
I came up with a different approach, which I think is generic (and not specific to DynamoDB):
1) When the API client specifies the start index, fetch all the keys from the table and store them in an array.
2) Find the key for the start index in that array.
3) Use it as the ExclusiveStartKey and fetch the number of records specified by the page length.
4) If the start index parameter is not present, the above steps are not needed; we simply don't specify an ExclusiveStartKey in the scan operation.
This solution has some drawbacks:
- We need to fetch all the keys whenever the user requests pagination with a start index.
- We need additional memory to store the IDs and the indexes.
- It costs additional database scan operations (one or more, to fetch the keys).
But I feel this is a very easy approach for the clients that use our APIs. Backward scanning works seamlessly, and if the user wants to see the "nth" page, that is possible.
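A rough sketch of that idea in Node.js (illustrative; it assumes a People table whose key attribute is id, and it ignores the case where the key scan itself is paginated past 1 MB):
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function getPage(startIndex, pageLength) {
  // Step 1: fetch only the keys (the extra scan noted in the drawbacks).
  const keyScan = await docClient.scan({
    TableName: 'People',
    ProjectionExpression: 'id'
  }).promise();
  const keys = keyScan.Items;

  // Steps 2-4: the key just before startIndex becomes the ExclusiveStartKey;
  // if startIndex is 0 (or absent) we scan from the beginning.
  const params = { TableName: 'People', Limit: pageLength };
  if (startIndex > 0) params.ExclusiveStartKey = keys[startIndex - 1];
  const page = await docClient.scan(params).promise();
  return page.Items;
}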
In fact, I faced the same problem, and I noticed that LastEvaluatedKey and ExclusiveStartKey were not working well for me, especially when using Scan, so I solved it like this:
GET /?page_no=1&page_size=10 =====> first page
The response will contain the count of records and the first 10 records.
Retry with an increasing page number until all records have come.
The code is below.
PS: I am using Python.
def paginate(response, page_no, page_size):
    # response is the full scan result fetched up front, e.g. response = table.scan()
    first_index = (page_no - 1) * page_size
    second_index = page_no * page_size
    if second_index > len(response['Items']):
        second_index = len(response['Items'])
    return {
        'statusCode': 200,
        'count': response['Count'],
        'response': response['Items'][first_index:second_index]
    }