The controller displays the data from the Excel sheet.
I need the controller to check the Excel sheet every hour, and the views should be updated as well.
This is my controller code:
string path3 = "D:/Project/Sesame Incident Dump_20160317.xls";
Excel.Application application3 = new Excel.Application();
Excel.Workbook workbook3 = application3.Workbooks.Open(path3);
Excel.Worksheet worksheet3 = workbook3.ActiveSheet;
Excel.Range range3 = worksheet3.UsedRange;
List<SesameIncident> ListSesameIncident = new List<SesameIncident>();
// Row 1 holds the headers, so start reading from row 2.
for (int row = 2; row <= range3.Rows.Count; row++)
{
    SesameIncident S = new SesameIncident();
    S.Number = ((Excel.Range)range3.Cells[row, 1]).Text;
    S.AssignedTo = ((Excel.Range)range3.Cells[row, 5]).Text;
    S.Opened = ((Excel.Range)range3.Cells[row, 6]).Text;
    S.Status = ((Excel.Range)range3.Cells[row, 7]).Text;
    S.Priority = ((Excel.Range)range3.Cells[row, 10]).Text;
    S.AssignedGroup = ((Excel.Range)range3.Cells[row, 12]).Text;
    ListSesameIncident.Add(S);
}
ViewBag.ListSesameIncidents = ListSesameIncident
    .Where(x => x.Status == "Pending Customer").Take(13);
You can add a Refresh header to your HttpContext.Response in your controller. The value is the interval in seconds (300 = 5 minutes here; use 3600 for hourly):
HttpContext.Response.Headers.Add("refresh", "300; url=" + Url.Action("Index"));
<script type="text/javascript">
    setTimeout(function () {
        location.reload();
    }, 5 * 60 * 1000);
</script>
You can refresh the page this way; refer to "Refresh Page for interval using js" for details.
For the controller, you might need a table in the database to record when the data was last updated; to have a reference you will have to store the reference data permanently. This is just my opinion; I have never had such a requirement.
To run something periodically without user interaction (that is, without a request to initiate it), a web application isn't what you want. Instead, you're looking for either a Windows Service or perhaps a simple Console Application scheduled to run at regular intervals by the host system's scheduling software (Windows Task Scheduler, cron, etc.). See How to execute a method in Asp.net MVC for every 24 hours
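For example, a bare-bones console version of the refresh job might look like this (a sketch only; RefreshIncidents() is a hypothetical placeholder for the Excel-parsing code from the question, persisting its results somewhere the website can read them):

using System;

class RefreshJob
{
    static void Main()
    {
        // One pass per invocation; Task Scheduler (or cron) supplies the hourly cadence.
        RefreshIncidents();
    }

    static void RefreshIncidents()
    {
        // Hypothetical: open the workbook, read the rows, and persist them
        // somewhere the website can see them (a database table, for instance).
        Console.WriteLine("Refreshed at {0:u}", DateTime.UtcNow);
    }
}

You would then create an hourly task in Windows Task Scheduler pointing at the compiled executable.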
I would rather think about caching, which could save re-reading the .xls file on every request. See How to cache data in a MVC application.
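As a rough sketch of the caching idea (this needs a reference to System.Runtime.Caching; the cache key and helper names are made up), the controller would only re-read the workbook once the one-hour cache entry has expired:

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public static class IncidentCache
{
    // 'load' is whatever reads the workbook, e.g. the code from the question.
    public static List<SesameIncident> GetIncidents(Func<List<SesameIncident>> load)
    {
        var cache = MemoryCache.Default;
        var incidents = cache.Get("SesameIncidents") as List<SesameIncident>;
        if (incidents == null)
        {
            // Cache miss: re-read the workbook and keep the result for one hour.
            incidents = load();
            cache.Set("SesameIncidents", incidents, DateTimeOffset.Now.AddHours(1));
        }
        return incidents;
    }
}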
Updating the client every X seconds is quite simple: just use a meta http-equiv tag with the value refresh in your page's header, e.g. <meta http-equiv="refresh" content="3600">. This solution is clean and easy to read, and you will not depend on a simple JavaScript loop.
To update your Excel sheet every X, you need another app with a Timer. You can do whatever you want there; if you're using .NET, a simple console application will do the work. If you are using Azure, you could just use a worker role, since that is exactly what a worker role is for. ;p
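A minimal sketch of that timer idea, assuming the refresh logic lives in a hypothetical DoWork() method:

using System;
using System.Threading;

class Worker
{
    static void Main()
    {
        // Fire DoWork immediately, then once an hour, until the process stops.
        using (var timer = new Timer(_ => DoWork(), null, TimeSpan.Zero, TimeSpan.FromHours(1)))
        {
            Console.WriteLine("Running. Press Enter to stop.");
            Console.ReadLine();
        }
    }

    static void DoWork()
    {
        // Hypothetical: re-read the Excel sheet and persist the results.
        Console.WriteLine("Refreshed at {0:u}", DateTime.UtcNow);
    }
}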
Is it even possible to have a slide presentation delete a slide based on a date? For example, have a slide expire after a certain date. The purpose is digital signage. I was just hoping to write a script that deletes a slide.
Please let me know if my question is not clear.
first post #not a programmer...yet
I think you want a cron job. Cron is a time-based job scheduler that runs tasks periodically at fixed times, dates, or intervals.
One approach would be using a time-driven trigger to run a function that deletes the slide.
The snippet below is Apps Script, meant to run on a time-driven trigger, with code to delete a page in a slide deck.
// presentationId identifies the deck; run checkSlide from a time-driven trigger.
var presentationId = 'YOUR_PRESENTATION_ID';

function checkSlide() {
  var slides = SlidesApp.openById(presentationId).getSlides();
  // Remove all slides other than the first page.
  slides.splice(0, 1); // take the first slide out of the array so it survives
  for (var i = 0; i < slides.length; i++) {
    slides[i].remove();
  }
}

// Remove one specific slide, where j is the index of the slide to be removed.
function removeSlideAt(j) {
  var slides = SlidesApp.openById(presentationId).getSlides();
  if (j >= 0 && j < slides.length) {
    slides[j].remove();
  }
}
References:
https://developers.google.com/slides/samples/writing#delete_a_page_or_page_element
https://developers.google.com/apps-script/guides/triggers/installable
So this is what I ended up with to get realtime starring/liking (of communities, in my case) working, with a Firebase datastore. It's a mess and surely I'm missing some fundamentals.
Here, my element gets communities, each as a Map community stored in an observed List communities. It has to rewrite that List several times as it changes each community Map based on the changed star count, the user's starred state, and some other fun:
getCommunities() {
  // Since we call this method a second time after user
  // signed in, clear the communities list before we recreate it.
  if (communities.length > 0) { communities.clear(); }

  var firebaseRoot = new db.Firebase(firebaseLocation);
  var communityRef = firebaseRoot.child('/communities');

  // TODO: Undo the limit of 20; https://github.com/firebase/firebase-dart/issues/8
  communityRef.limit(20).onChildAdded.listen((e) {
    var community = e.snapshot.val();

    // snapshot.name is Firebase's ID, i.e. "the name of the Firebase location",
    // so we'll add that to our local item list.
    community['id'] = e.snapshot.name();
    print(community['id']);

    // If the user is signed in, see if they've starred this community.
    if (app.user != null) {
      firebaseRoot.child('/users/' + app.user.username + '/communities/' + community['id']).onValue.listen((e) {
        if (e.snapshot.val() == null) {
          community['userStarred'] = false;
          // TODO: Add community star_count?!
        } else {
          community['userStarred'] = true;
        }
        print("${community['userStarred']}, star count: ${community['star_count']}");

        // Replace the community in the observed list w/ our updated copy.
        communities
          ..removeWhere((oldItem) => oldItem['alias'] == community['alias'])
          ..add(community)
          ..sort((m1, m2) => m1["updatedDate"].compareTo(m2["updatedDate"]));
        communities = toObservable(communities.reversed.toList());
      });
    }

    // If no updated date, use the created date.
    if (community['updatedDate'] == null) {
      community['updatedDate'] = community['createdDate'];
    }

    // Handle the case where no star count yet.
    if (community['star_count'] == null) {
      community['star_count'] = 0;
    }

    // The live-date-time element needs parsed dates.
    community['updatedDate'] = DateTime.parse(community['updatedDate']);
    community['createdDate'] = DateTime.parse(community['createdDate']);

    // Listen for realtime changes to the star count.
    communityRef.child(community['alias'] + '/star_count').onValue.listen((e) {
      int newCount = e.snapshot.val();
      community['star_count'] = newCount;
      // Replace the community in the observed list w/ our updated copy.
      // TODO: Re-writing the list each time is ridiculous!
      communities
        ..removeWhere((oldItem) => oldItem['alias'] == community['alias'])
        ..add(community)
        ..sort((m1, m2) => m1["updatedDate"].compareTo(m2["updatedDate"]));
      communities = toObservable(communities.reversed.toList());
    });

    // Insert each new community into the list.
    communities.add(community);

    // Sort the list by the item's updatedDate, then reverse it.
    communities.sort((m1, m2) => m1["updatedDate"].compareTo(m2["updatedDate"]));
    communities = toObservable(communities.reversed.toList());
  });
}
Here we toggle the star, which again replaces the observed communities List a few times as we update the count in the affected community Maps and thus rewrite the List to reflect that:
toggleStar(Event e, var detail, Element target) {
  // Don't fire the core-item's on-click, just the icon's.
  e.stopPropagation();

  if (app.user == null) {
    app.showMessage("Kindly sign in first.", "important");
    return;
  }

  bool isStarred = (target.classes.contains("selected"));
  var community = communities.firstWhere((i) => i['id'] == target.dataset['id']);

  var firebaseRoot = new db.Firebase(firebaseLocation);
  var starredCommunityRef = firebaseRoot.child('/users/' + app.user.username + '/communities/' + community['id']);
  var communityRef = firebaseRoot.child('/communities/' + community['id']);

  if (isStarred) {
    // If it's starred, time to unstar it.
    community['userStarred'] = false;
    starredCommunityRef.remove();

    // Update the star count.
    communityRef.child('/star_count').transaction((currentCount) {
      if (currentCount == null || currentCount == 0) {
        community['star_count'] = 0;
        return 0;
      } else {
        community['star_count'] = currentCount - 1;
        return currentCount - 1;
      }
    });

    // Update the list of users who starred.
    communityRef.child('/star_users/' + app.user.username).remove();
  } else {
    // If it's not starred, time to star it.
    community['userStarred'] = true;
    starredCommunityRef.set(true);

    // Update the star count.
    communityRef.child('/star_count').transaction((currentCount) {
      if (currentCount == null || currentCount == 0) {
        community['star_count'] = 1;
        return 1;
      } else {
        community['star_count'] = currentCount + 1;
        return currentCount + 1;
      }
    });

    // Update the list of users who starred.
    communityRef.child('/star_users/' + app.user.username).set(true);
  }

  // Replace the community in the observed list w/ our updated copy.
  communities.removeWhere((oldItem) => oldItem['alias'] == community['alias']);
  communities.add(community);
  communities.sort((m1, m2) => m1["updatedDate"].compareTo(m2["updatedDate"]));
  communities = toObservable(communities.reversed.toList());
  print(communities);
}
There's also some other craziness where we have to get the list of communities again when app.changes fires, because we only load app.user after the app and list initially load, and now that we have the user we need to turn on the appropriate stars. So my attached() looks like:
attached() {
  app.pageTitle = "Communities";
  getCommunities();
  app.changes.listen((List<ChangeRecord> records) {
    if (app.user != null) {
      getCommunities();
    }
  });
}
There, it seems I could just be getting the stars and updating each affected community Map, then repopulating the observed communities List, but that's the least of it.
The full thing: https://gist.github.com/DaveNotik/5ccdc9e74429cf87d641
How can I improve all this Map/List management, e.g. where every time I change a community Map, I have to rewrite the whole communities List? Should I be thinking of it differently?
What about all this querying Firebase? Surely, there's a better way, but it seems I need to do a lot to keep it realtime, and also the element gets attached and detached, so it seems I need to run getCommunities() each time. Unless the OOP way is objects get created, and they're always there to be observed whenever the element is attached? I'm missing those fundamentals.
This app.changes business to handle the case where we load the list before we have the app.user (which then means we want to load her stars) - is there a better way?
Other ridiculousness?
Big question, I know. Thank you for helping me get a handle on the right approach as I move forward!
I think there are two different ways to choose from if you want to keep your application's data in real-time sync with the server database:
1. Polling (pull method, i.e. the client pulls the data from the server)
The application polls, i.e. requests the updated data from, the server. Polling can be automatic (for example at an interval of 60 s) or requested by the user (= refresh). A short automatic interval causes high load on the server, while with a long interval you lose the real-time feeling.
2. Full-duplex (push method, i.e. the server can push the data to the client)
The application and the server have a full-duplex connection between them, and the server can send the data, or a notification that data is available, to the client. The client can then decide whether or not to retrieve the data.
This is the modern method, because it keeps the network traffic and the server load to a minimum while still providing real-time updates.
Firebase boasts this kind of updates, but I'm not sure whether it is full-duplex or just a clever way of polling. The WebSocket protocol is a real full-duplex connection, and the Dart server supports it.
The updated data from a server can include:
1. A full dataset
Basically the server sends a full dataset (= the initial query) and doesn't "know" anything about updated data. This is the easiest way to go if you have reasonably small datasets. Many times you'll have very small datasets among the big ones, so this way can be useful.
2. A dataset including only the new data
The server can send a dataset based on a modified timestamp, i.e. every time a record in the database changes, an update timestamp is saved, and the query can be filtered by this timestamp. In other words, the application knows when it last updated the data and requests only newer data.
3. A changed record
The server keeps track of updated data and sends it to the application. The data can be sent record by record as changes occur, or the server can collect it into bigger chunks to send. This method requires the server to keep track of every connected client in order to send the correct data to each one. When you add an authentication process for clients, i.e. not every piece of data can be sent to everyone, it can get quite complicated.
I think the easiest way is to use method number 2 for updated data, as sketched below.
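To make method number 2 concrete, here is a tiny illustrative sketch (written in C# for neutrality; every name in it is invented): the client remembers the time of its last sync, and the server returns only the rows modified after that.

using System;
using System.Collections.Generic;
using System.Linq;

class Row
{
    public string Id;
    public string Payload;
    public DateTime UpdatedUtc; // the server sets this on every write
}

class DeltaSync
{
    static readonly List<Row> Table = new List<Row>();

    // The client sends the timestamp of its last sync and gets only newer rows.
    static List<Row> FetchSince(DateTime lastSyncUtc)
    {
        return Table.Where(r => r.UpdatedUtc > lastSyncUtc).ToList();
    }
}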
Last thing...
What to do with the data received?
1. Handle everything as new
If the application receives updated data, it destroys/clears all the lists and maps and recreates/refills them with the new data. Typical problems: the user loses their current position on the page, or the data they were looking at jumps around. If the application has modified or extended old data for some reason, all those modifications are lost. This method works OK if the user requests a refresh.
2. Update only the changed data
The application never clears the initial lists or maps; it just updates them with newly received data. Typically you construct a new combined map from the queried data for a specific need (for example a certain view). The combined map already has all the information you want to show in that view (default values even if the initial queries didn't have data for a field), and you just update the new values in it.
If the updated information needs a new member in the list, you just add it at the end.
If the updated information requires a deletion from the list, it might be a good idea to use an extra field "active" and filter the list/map with it. With filtering you won't lose any references.
If you need to sort or filter the data, it should be done by the view or on user request. Basically the data is stored in the application and updated as needed; when a user needs to see it in a specific way, the view should present it the proper way. This is called model-view-controller (MVC), and the main idea is to separate the data from the view.
I'm sorry this long answer didn't directly answer any of your questions, but I tried to cut this challenge into smaller chunks. Many times you can see an interface between these chunks, and you can design and organize your code nicely by using those interfaces.
I want to update a task item programmatically via CSOM. The item updates, but the workflow is not triggered. If I just open the item in SharePoint and save it, then the workflow triggers.
List requestTasksList = MyWeb.Lists.GetByTitle("TestRequest Tasks");
List<TestRequestModel> testRequestList = new List<TestRequestModel>();
ListItemCollection ColListItems = requestTasksList.GetItems(Spqur);
ctx.Load(ColListItems);
ctx.ExecuteQuery();
foreach (ListItem task in ColListItems)
{
    task["Status"] = "Completed";
    task["TaskOutcome"] = "Approved";
    task["PercentComplete"] = 1.0;
    task["Checkmark"] = 1;
    task.Update();
    requestTasksList.Update();
}
ctx.ExecuteQuery();
This is the updated task item.
As I said, when I click the save button, the workflow triggers and a new task is created.
I'm not sure if it's a typo, but it should be:
List requestTasksList = MyWeb.Lists.GetByTitle("TestRequest Tasks");
List<TestRequestModel> testRequestList = new List<TestRequestModel>();
ListItemCollection ColListItems = requestTasksList.GetItems(Spqur);
// The collection still has to be loaded before it can be enumerated.
ctx.Load(ColListItems);
ctx.ExecuteQuery();
foreach (ListItem task in ColListItems)
{
    task["Status"] = "Completed";
    task["TaskOutcome"] = "Approved";
    task["PercentComplete"] = 1.0;
    task["Checkmark"] = 1;
    task.Update(); // no requestTasksList.Update() inside the loop
}
ctx.ExecuteQuery();
We needed to do the same thing and have found that there are no event handlers on the workflow tasks list in SharePoint 2013. I know that there is a SPWorkflowAutostartEventReceiver on lists that have workflows auto start on add or update, so I assumed this same approach would be done for workflow tasks as well, but it is not. Since there are no event handlers on the workflow tasks list, I surmise that all workflow triggers are initiated from the server-side UI code on the task list (horrible design).
For us, we need to work completely client side with no farm solution or sandboxed code, so our only solution has been to screen-scrape URLs and then open pages or dialogs for the user to do things like cancel all tasks for an approval workflow. Granted, this approach still requires user input. I suppose you could screen-scrape the whole page and play back the action of hitting buttons on a task page or cancel-task page if you needed to avoid user input. That would be a pain, though.
I have a job scheduled in the Application_Start event using Quartz.NET; the trigger fires every 1 minute, given by the variable repeatDurationTestData = "0 0/1 * * * ?".
The triggering starts when I first open the site, but stops after some random time once I close the browser, and starts again when I open the site. Following is the code:
IMyJob testData = new SynchronizeTestData();
IJobDetail jobTestData = new JobDetailImpl("Job", "Group", testData.GetType());
ICronTrigger triggerTestData = new CronTriggerImpl("Trigger", "Group", repeatDurationTestData);
_scheduler.ScheduleJob(jobTestData, triggerTestData);
DateTimeOffset? nextFireTime = triggerTestData.GetNextFireTimeUtc();
What am I doing wrong here? Is this because of some misfire? Please suggest.
Thanks
At first I would use a simple trigger in this case, as it takes a repeat interval and seems to fit better than the cron trigger (from lesson 5 on the quartz.net website):
SimpleTrigger trigger2 = new SimpleTrigger("myTrigger",
                                           null,
                                           DateTime.UtcNow,
                                           null,
                                           SimpleTrigger.RepeatIndefinitely,
                                           TimeSpan.FromSeconds(60));
I would also recommend that you don't put the Quartz scheduler within the website. The main purpose of a job system is to work independently of any other system, so it generally fits naturally into a Windows service. By running it as part of the website, you aren't guaranteed it will keep going: if you lose the app pool or it restarts, you won't get a reliable result.
There is an example with the quartz.net download.
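For illustration, a minimal standalone host could look like this (a sketch using the fluent Quartz.NET 2.x API, assuming SynchronizeTestData implements IJob):

using System;
using Quartz;
using Quartz.Impl;

class Program
{
    static void Main()
    {
        IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<SynchronizeTestData>()
            .WithIdentity("Job", "Group")
            .Build();
        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("Trigger", "Group")
            .StartNow()
            .WithSimpleSchedule(s => s.WithIntervalInSeconds(60).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
        Console.ReadLine(); // keep the host process alive
        scheduler.Shutdown();
    }
}

Hosted in a Windows service, the scheduler keeps running regardless of app-pool recycles.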
Hope that helps.
I want to use Amazon DynamoDB with Rails, but I have not found a way to implement pagination.
I will use AWS::Record::HashModel as the ORM.
This ORM supports limits like this:
People.limit(10).each {|person| ... }
But I could not figure out how to implement the following MySQL query in DynamoDB:
SELECT *
FROM `People`
LIMIT 1 , 30
You issue queries using LIMIT. If the subset returned does not cover the full table, a LastEvaluatedKey value is returned. You use this value as the ExclusiveStartKey in the next query, and so on.
From the DynamoDB Developer Guide.
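As an illustration of that loop (shown here with the AWS SDK for .NET; the table name and page size are made up):

using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

class PagingDemo
{
    static void Main()
    {
        var client = new AmazonDynamoDBClient();
        Dictionary<string, AttributeValue> startKey = null;
        do
        {
            var response = client.ScanAsync(new ScanRequest
            {
                TableName = "People",       // made-up table name
                Limit = 30,                 // page size
                ExclusiveStartKey = startKey
            }).Result;
            Console.WriteLine("Got {0} items", response.Items.Count);
            // LastEvaluatedKey is empty once the last page has been read.
            startKey = response.LastEvaluatedKey;
        } while (startKey != null && startKey.Count > 0);
    }
}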
You can provide 'page-size' in your query to set the result set size.
The response from DynamoDB contains 'LastEvaluatedKey', which indicates the last key in the page. If the response doesn't contain 'LastEvaluatedKey', there are no results left to fetch.
Use the 'LastEvaluatedKey' as the 'ExclusiveStartKey' when fetching the next page.
I hope this helps.
DynamoDB Pagination
Here's a simple copy-paste-run proof of concept (Node.js) for stateless forward/reverse navigation with DynamoDB. In summary: each response includes the navigation history, allowing the user to explicitly and consistently request either the next or previous page (while next/prev params exist):
GET /accounts -> first page
GET /accounts?next=A3r0ijKJ8 -> next page
GET /accounts?prev=R4tY69kUI -> previous page
Considerations:
If your ids are large and/or users might do a lot of navigation, then the potential size of the next/prev params might become too large.
Yes you do have to store the entire reverse path - if you only store the previous page marker (per some other answers) you will only be able to go back one page.
It won't handle changing pageSize midway, consider baking pageSize into the next/prev value.
base64 encode the next/prev values, and you could also encrypt.
Scans are inefficient, while this suited my current requirement it won't suit all!
// demo.js
const mockTable = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]

const getPagedItems = (pageSize = 5, cursor = {}) => {
  // Parse cursor
  const keys = cursor.next || cursor.prev || [] // fwd first
  let key = keys[keys.length-1] || null // eg ddb's PK

  // Mock query (mimic dynamodb response)
  const Items = mockTable.slice(parseInt(key) || 0, pageSize+key)
  const LastEvaluatedKey = Items[Items.length-1] < mockTable.length
      ? Items[Items.length-1] : null

  // Build response
  const res = {items:Items}
  if (keys.length > 0) // add reverse nav keys (if any)
    res.prev = keys.slice(0, keys.length-1)
  if (LastEvaluatedKey) // add forward nav keys (if any)
    res.next = [...keys, LastEvaluatedKey]

  return res
}

// Run test ------------------------------------
const runTest = () => {
  const PAGE_SIZE = 6
  let x = {}, i = 0

  // Page to end
  while (i == 0 || x.next) {
    x = getPagedItems(PAGE_SIZE, {next:x.next})
    console.log(`Page ${++i}: `, x.items)
  }

  // Page back to start
  while (x.prev) {
    x = getPagedItems(PAGE_SIZE, {prev:x.prev})
    console.log(`Page ${--i}: `, x.items)
  }
}

runTest()
I faced a similar problem.
The generic pagination approach uses a "start index" (or "start page") and a "page length".
The "ExclusiveStartKey"/"LastEvaluatedKey" based approach is very DynamoDB-specific, and I feel this DynamoDB-specific implementation of pagination should be hidden from the API client/UI.
Also, if the application is serverless, using a service like Lambda, it will not be possible to maintain state on the server; the flip side is that the client implementation would become very complex.
I came up with a different approach, which I think is generic (and not specific to DynamoDB):
When the API client specifies the start index, fetch all the keys from the table and store them in an array.
Find the key for the start index, as specified by the client, in the array.
Use that key as the ExclusiveStartKey and fetch the number of records specified by the page length.
If the start index parameter is not present, the above steps are not needed; we simply don't specify an ExclusiveStartKey in the scan operation.
This solution has some drawbacks:
We need to fetch all the keys when the user asks for pagination with a start index.
We need additional memory to store the IDs and the indexes.
Additional database scan operations (one or more, to fetch the keys).
But I feel this is a very easy approach for the clients using our APIs. Backward scanning works seamlessly, and if the user wants to see the "nth" page, that is possible too. A sketch follows.
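A rough sketch of the key-index idea (in C# with the AWS SDK for .NET; the table name "People" and key attribute "id" are invented, and error handling plus the multi-scan case are omitted):

using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

class KeyIndexPaging
{
    static List<Dictionary<string, AttributeValue>> GetPage(
        IAmazonDynamoDB client, int startIndex, int pageLength)
    {
        Dictionary<string, AttributeValue> startKey = null;
        if (startIndex > 0)
        {
            // Fetch only the keys (a big table may need several scans here).
            var keyScan = client.ScanAsync(new ScanRequest
            {
                TableName = "People",
                ProjectionExpression = "id"
            }).Result;
            // The key just before the start index becomes the ExclusiveStartKey.
            startKey = new Dictionary<string, AttributeValue>
            {
                { "id", keyScan.Items[startIndex - 1]["id"] }
            };
        }
        var page = client.ScanAsync(new ScanRequest
        {
            TableName = "People",
            Limit = pageLength,
            ExclusiveStartKey = startKey
        }).Result;
        return page.Items;
    }
}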
In fact I faced the same problem, and I noticed that LastEvaluatedKey and ExclusiveStartKey were not working well, especially when using Scan, so I solved it like this:
GET/?page_no=1&page_size=10 =====> first page
The response will contain the count of records and the first 10 records.
Retry, increasing the page number, until all records have been returned.
The code is below (PS: I am using Python):
import boto3

# Table name 'People' is assumed for illustration.
table = boto3.resource('dynamodb').Table('People')

def get_page(page_no, page_size):
    response = table.scan()  # note: a single scan returns at most 1 MB of items
    first_index = (page_no - 1) * page_size
    second_index = page_no * page_size
    if second_index > len(response['Items']):
        second_index = len(response['Items'])
    return {
        'statusCode': 200,
        'count': response['Count'],
        'response': response['Items'][first_index:second_index]
    }