I am looking at the "BiosInitTime" value in ETW events parsed with "tracerpt.exe", and I noticed that for hibernate/S4 resume it is always 0 (see the example at the end). The same happens with ETL traces collected directly using XPERF or via the ADK Windows Assessment Console. In the WAC/WPA analysis GUI, however, the BIOS time is shown. So the information appears to be in the trace, and "tracerpt.exe" seems to be parsing the wrong events when calculating "BiosInitTime".
Which specific start/stop events should I check to calculate the BIOS init time from an ETL trace, say, using xperf?
Thanks
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-Windows-Kernel-Power" Guid="{331c3b3a-2005-44c2-ac5e-77220c37d6b4}" />
<EventID>39</EventID>
<Version>0</Version>
<Level>4</Level>
<Task>33</Task>
<Opcode>0</Opcode>
<Keywords>0x400000000000000C</Keywords>
<TimeCreated SystemTime="2016-02-03T15:08:43.601479000Z" />
<Correlation ActivityID="{00000000-0000-0000-0000-000000000000}" />
<Execution ProcessID="4" ThreadID="3140" ProcessorID="0" KernelTime="180" UserTime="0" />
<Channel>Microsoft-Windows-Kernel-Power/Diagnostic</Channel>
<Computer />
</System>
<EventData>
<Data Name="SleepTime"> 1546</Data>
<Data Name="ResumeTime"> 769</Data>
<Data Name="DriverWakeTime"> 715</Data>
<Data Name="HiberWriteTime"> 2999</Data>
<Data Name="HiberReadTime"> 1862</Data>
<Data Name="HiberPagesWritten"> 148964</Data>
<Data Name="BiosInitTime"> 0</Data>
</EventData>
<RenderingInfo Culture="en-US">
<Level>Information </Level>
<Opcode>Info </Opcode>
<Keywords>
<Keyword>po:Diagnostic</Keyword>
<Keyword>po:Performance</Keyword>
</Keywords>
<Task>PowerTransition</Task>
<Channel>Microsoft-Windows-Kernel-Power/Diagnostic</Channel>
<Provider>Microsoft-Windows-Kernel-Power </Provider>
</RenderingInfo>
The Microsoft-Windows-Kernel-Power events are not captured in the ETL when you select hibernation in WPRUI.exe. You can see this if you open the ETL with PerfView and look at the raw event list.
So when tracerpt searches for BiosInitTime, it reports 0. If you do see the value when running the ADK Windows Assessment Console, that means the Microsoft-Windows-Kernel-Power events are captured there.
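In case it helps with digging, the values above are carried by Microsoft-Windows-Kernel-Power event 39 (task PowerTransition), so you can verify what a given trace actually contains by recording the provider into your own session and dumping it. A minimal capture sketch; the session and file names are placeholders of mine, and it assumes the user-mode session survives the S4 transition:

xperf -start PowerSession -on Microsoft-Windows-Kernel-Power -f kernelpower.etl
rem ... hibernate and resume here, e.g. via shutdown /h ...
xperf -stop PowerSession
rem Dump everything, then look for EventID 39 and its BiosInitTime field:
tracerpt kernelpower.etl -o kernelpower.xml -of XML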
I have two files that I want to load using g.io(<file name>).read().iterate(): nodes.xml and edges.xml.
The nodes.xml file contains the nodes of the graph I want to upload, and its contents are:
<?xml version='1.0' encoding='utf-8'?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<key id="labelV" for="node" attr.name="labelV" attr.type="string" />
<key id="name" for="node" attr.name="name" attr.type="string" />
<key id="age" for="node" attr.name="age" attr.type="int" />
<graph id="G" edgedefault="directed">
<node id="1">
<data key="labelV">person</data>
<data key="name">marko</data>
<data key="age">29</data>
</node>
<node id="2">
<data key="labelV">person</data>
<data key="name">vadas</data>
<data key="age">27</data>
</node>
</graph>
</graphml>
The edges.xml file contains the edges of the graph I want to upload, and its contents are:
<?xml version='1.0' encoding='utf-8'?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<key id="labelE" for="edge" attr.name="labelE" attr.type="string" />
<key id="weight" for="edge" attr.name="weight" attr.type="double" />
<graph id="G" edgedefault="directed">
<edge id="7" source="1" target="2">
<data key="labelE">knows</data>
<data key="weight">0.5</data>
</edge>
</graph>
</graphml>
I want to upload the nodes first by running g.io('nodes.xml').read().iterate() and then the edges by running g.io('edges.xml').read().iterate(). But when I load edges.xml, instead of adding edges to the previously created nodes, it creates new nodes.
Is it possible to easily load the nodes first and then the edges in separate queries with a similar command in Gremlin? I know this can be accomplished with more complex user queries that read edges.xml and create the edges one by one, but I'm wondering if there is something easier. Also, I'd rather not upload a single file containing all the nodes and edges.
I'm afraid that the GraphMLReader doesn't work that way. It's not designed to read into an existing graph. I honestly can't remember if this was done purposefully or not.
The code isn't too complicated, though. You could probably just modify it to work the way that you want. You can see in the source where the code checks the vertex cache for the id. That cache is empty on your second execution because it is only filled by new vertex additions: it doesn't remember anything from your first run, and it doesn't read from the graph directly on your second run. Simply change that logic to better suit your needs, or sidestep the reader entirely as in the sketch below.
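If patching TinkerPop is not appealing, a small standalone loader works too. Below is a rough, untested sketch: it assumes the vertex ids from nodes.xml survive as the graph's ids (adjust the g.V() lookups if your graph coerces ids), that every edge carries labelE and weight data as in your files, and it uses TinkerGraph as a stand-in for whatever graph you target. It loads the vertices with the normal io() call, then streams edges.xml with StAX and adds each edge between the existing endpoints.

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.structure.io.IoCore;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

public class EdgeLoader {
    public static void main(String[] args) throws Exception {
        // Vertices first, exactly as in the question.
        Graph graph = TinkerGraph.open();
        graph.io(IoCore.graphml()).readGraph("nodes.xml");
        GraphTraversalSource g = graph.traversal();

        // Then stream edges.xml and look the endpoints up instead of
        // letting GraphMLReader create fresh vertices.
        XMLStreamReader xml = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream("edges.xml"));
        String source = null, target = null, key = null, label = "edge";
        Double weight = null;
        while (xml.hasNext()) {
            switch (xml.next()) {
                case XMLStreamConstants.START_ELEMENT:
                    if ("edge".equals(xml.getLocalName())) {
                        source = xml.getAttributeValue(null, "source");
                        target = xml.getAttributeValue(null, "target");
                    } else if ("data".equals(xml.getLocalName())) {
                        key = xml.getAttributeValue(null, "key");
                    }
                    break;
                case XMLStreamConstants.CHARACTERS:
                    if ("labelE".equals(key)) label = xml.getText().trim();
                    else if ("weight".equals(key)) weight = Double.valueOf(xml.getText().trim());
                    break;
                case XMLStreamConstants.END_ELEMENT:
                    if ("data".equals(xml.getLocalName())) {
                        key = null;
                    } else if ("edge".equals(xml.getLocalName())) {
                        Vertex out = g.V(source).next();   // existing vertex, not a new one
                        Vertex in = g.V(target).next();
                        out.addEdge(label, in, "weight", weight);
                        source = null; target = null; label = "edge"; weight = null;
                    }
                    break;
            }
        }
        xml.close();
    }
}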
I am using the MonkehTweet ColdFusion wrapper for Twitter authentication. I have everything up and working, but I cannot get my head around posting multiple images using the postUpdateWithMedia function. I am relatively new to ColdFusion and learning it along the way. A simple call to postUpdateWithMedia(status="", media="") posts to Twitter with one image, but how can I use it to post multiple images? The postUpdateWithMedia function from MonkehTweet is:
<cffunction name="postUpdateWithMedia" access="public" output="false" hint="Updates the authenticating user's status. Request must be a POST. A status update with text identical to the authenticating user's current status will be ignored to prevent duplicates.">
<cfargument name="status" required="true" type="String" hint="The text of your status update. URL encode as necessary. Statuses over 140 characters will be forceably truncated." />
<cfargument name="media" required="true" type="string" hint="Up to max_media_per_upload files may be specified in the request, each named media[]. Supported image formats are PNG, JPG and GIF. Animated GIFs are not supported." />
<cfargument name="possibly_sensitive" required="false" type="boolean" default="false" hint="Set to true for content which may not be suitable for every audience." />
<cfargument name="in_reply_to_status_id" required="false" type="String" hint="The ID of an existing status that the update is in reply to." />
<cfargument name="lat" required="false" type="String" hint="The location's latitude that this tweet refers to." />
<cfargument name="long" required="false" type="String" hint="The location's longitude that this tweet refers to." />
<cfargument name="place_id" required="false" type="String" hint="A place in the world. These IDs can be retrieved from geo/reverse_geocode." />
<cfargument name="display_coordinates" required="false" type="String" hint="Whether or not to put a pin on the exact coordinates a tweet has been sent from." />
<cfargument name="checkHeader" required="false" type="boolean" default="false" hint="If set to true, I will abort the request and return the response headers for debugging." />
<cfargument name="timeout" required="false" type="string" default="#variables.instance.timeout#" hint="An optional timeout value, in seconds, that is the maximum time the cfhttp requests can take. If the time-out passes without a response, ColdFusion considers the request to have failed." />
<cfset var strTwitterMethod = '' />
<cfset arguments["media[]"] = arguments.media />
<cfset structDelete(arguments,'media') />
<cfset strTwitterMethod = getCorrectEndpoint('api') & 'statuses/update_with_media.json' />
<cfreturn genericAuthenticationMethod(timeout=getTimeout(), httpURL=strTwitterMethod,httpMethod='POST', parameters=arguments, checkHeader=arguments.checkHeader) />
</cffunction>
I have tried passing in multiple files as
postUpdateWithMedia(status="", media="", media=""), but it did not work; evidently I am passing the multiple media arguments the wrong way. Can someone help me with how to pass in multiple media arguments?
Unfortunately, MonkehTweet does not include support for the Twitter API media/upload endpoint, which is required for multi-image Tweets; it only supports the deprecated statuses/update_with_media endpoint, which still works but accepts only a single image. I also noticed that the code still refers to 140-character Tweets, so it probably has not been updated for some time. I'm not aware of an alternative ColdFusion wrapper for the API either, so this is something you would need to build yourself.
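For reference, multi-image Tweets need the newer two-step flow: POST each file to https://upload.twitter.com/1.1/media/upload.json, collect the returned media_id_string values, then create the Tweet via statuses/update with a comma-separated media_ids parameter (up to four images). A very rough CFML sketch follows; uploadMedia() is hypothetical and stands in for a function you would have to write yourself (perhaps on top of monkehTweet's genericAuthenticationMethod), and postUpdate() would likewise need to be taught to forward media_ids.

<!--- Hypothetical sketch of the two-step multi-image flow. uploadMedia()
      does NOT exist in monkehTweet; it represents a function you would
      write that sends one OAuth-signed POST to
      https://upload.twitter.com/1.1/media/upload.json and returns the
      media_id_string from the JSON response. --->
<cfset mediaIds = [] />
<cfloop list="image1.png,image2.png,image3.png" index="thisFile">
    <cfset arrayAppend(mediaIds, objMonkehTweet.uploadMedia(media=thisFile)) />
</cfloop>

<!--- statuses/update accepts up to four ids as a comma-separated list;
      this assumes postUpdate() is extended to pass media_ids through. --->
<cfset result = objMonkehTweet.postUpdate(
    status = "A tweet with several images",
    media_ids = arrayToList(mediaIds)
) />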
I have been struggling to load a GraphML file into TinkerPop3.
Graph graphMLGraph = TinkerGraph.open();
graphMLGraph.io(IoCore.graphml()).readGraph(file.getAbsolutePath());
While loading, I want the edges to have a label.
graphTraversalSource.E().toStream().forEach(edge -> {
System.out.println(edge.label());
});
The above code always prints the label as "edge" for every edge in the GraphML. My GraphML snippet:
<edge id="1" source="1" target="3">
<data key="edgelabel">belongs-to</data>
<data key="weight">1.0</data>
</edge>
<edge id="2" source="1" target="4">
<data key="weight">1.0</data>
<data key="edgelabel">part-of</data>
</edge>
And the key definition:
<key attr.name="Edge Label" attr.type="string" for="edge" id="edgelabel"/>
I am using DSE 5.1.3's Java driver, TinkerPop 3.2.5 is pulled in as a transitive dependency, and I used Gephi to author the GraphML.
By default, your edge label will be recognized if you define the key as:
<key id="labelE" for="edge" attr.name="labelE" attr.type="string" />
The important part is that attr.name defaults to "labelE"; see the IO Reference documentation for GraphML. Note that the default can be changed when you instantiate the GraphMLReader.Builder object by setting the edgeLabelKey value on the builder itself, as sketched below.
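Since your Gephi export already has a key with id="edgelabel" and attr.name="Edge Label", here is a short sketch of reading it without editing the file. The file path is a placeholder; GraphMLReader compares the configured value against the key's attr.name, so if your TinkerPop version matches on the key id instead, pass "edgelabel".

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.io.graphml.GraphMLReader;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

Graph graph = TinkerGraph.open();
// Tell the reader which GraphML key carries the edge label, since the
// Gephi export doesn't use the default "labelE".
GraphMLReader reader = GraphMLReader.build()
        .edgeLabelKey("Edge Label")
        .create();
try (InputStream is = new FileInputStream("graph.graphml")) {
    reader.readGraph(is, graph);
}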
Below is the sample I used with log4j 1.x. I cannot find any example of how to convert it to Log4j 2.
<appender name="CoalescingStatisticsAppender"
class="org.perf4j.log4j.AsyncCoalescingStatisticsAppender">
<!--
The TimeSlice option is used to determine the time window for which
all received StopWatch logs are aggregated to create a single
GroupedTimingStatistics log. Here it is set explicitly to the
default of 30000 ms.
-->
<param name="TimeSlice" value="30000" />
<appender-ref ref="perf4jFileAppender" />
</appender>
The Appender won't work as is in Log4j 2. It would have to be rewritten.
You may be interested to know that Log4j 2 supports nanoTime timestamps in PatternLayout. This, in combination with the low-overhead Async Loggers, allows you to use Log4j as a rough profiling tool; a sketch follows.
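A minimal sketch of that approach (file and logger names are placeholders; this is not a port of AsyncCoalescingStatisticsAppender, just the nanoTime feature mentioned above): route perf4j's TimingLogger to a file whose PatternLayout emits %N, i.e. System.nanoTime(), and enable async loggers via the context selector system property.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch only: %N prints System.nanoTime() for each event. Start the
     JVM with
     -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
     to make all loggers asynchronous. -->
<Configuration status="warn">
  <Appenders>
    <File name="perf4jFileAppender" fileName="perfStats.log">
      <PatternLayout pattern="%N [%t] %-5level %logger - %msg%n"/>
    </File>
  </Appenders>
  <Loggers>
    <Logger name="org.perf4j.TimingLogger" level="info" additivity="false">
      <AppenderRef ref="perf4jFileAppender"/>
    </Logger>
    <Root level="info">
      <AppenderRef ref="perf4jFileAppender"/>
    </Root>
  </Loggers>
</Configuration>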
I'd like to set up a RollingFileAppender in log4net such that the current (i.e. today's) log file always has a static name (like app.log), but upon roll over at the end of the day, it should be renamed to app.<date>.log. Here's as close as I've got so far (note that I'm using every-minute rollover rather than every-day rollover since this is easier to debug):
<appender name="applog" type="log4net.Appender.RollingFileAppender">
<file value="app.log" />
<staticLogFileName value="false" />
<datePattern value=".yyyy-MM-dd-hh-mm" />
<preserveLogFileNameExtension value="true" />
<appendToFile value="true" />
<rollingStyle value="Date" />
<maxSizeRollBackups value="5" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
</layout>
</appender>
The problem with this is that I see the following when a request begins:
app.2016-02-01-05-00.log
And by the time the request ends, I have these files:
app.2016-02-01-05-00.log
app.2016-02-01-05-00.log.2016-02-01-05-00.log
Notice that the minute hasn't rolled over yet, but it appears to have created a rollover file of some kind anyway. Also, today's file is never called just 'app.log' as I want; it always starts with the timestamp in the name. Lastly, it doesn't appear to honor my maxSizeRollBackups of 5: as far as I can tell, the backups grow indefinitely without ever getting deleted.
I tried removing the staticLogFileName tag, and that makes today's name 'app.log' like I want, but then it rolls over in place, overwriting itself and not creating backup files.
After breaking down and downloading the source code, it turned out to be a permission issue with the rollover's System.IO.File.Move() call. I needed to grant the folder's Modify permission as well, not just Read and Write (which is strange, because isn't a move technically a write operation?).
I also discovered that you should NOT set staticLogFileName to false, so I had to remove that element from the XML; the resulting configuration is below.
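For reference, the configuration that ended up working is the original one with the staticLogFileName element removed (and Modify permission granted on the log folder). The every-minute datePattern is still my debugging value; it would be .yyyy-MM-dd for daily rollover.

<appender name="applog" type="log4net.Appender.RollingFileAppender">
  <file value="app.log" />
  <datePattern value=".yyyy-MM-dd-hh-mm" />
  <preserveLogFileNameExtension value="true" />
  <appendToFile value="true" />
  <rollingStyle value="Date" />
  <maxSizeRollBackups value="5" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
  </layout>
</appender>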