I'm curious about TradingView.
If a shape is drawn at future coordinates, beyond the current view, how do I get the corresponding coordinates?
My most recent coordinate is at index 359 of a 360-element array.
But the array only extends as far as the existing bars, so the future part doesn't work when drawing.
I want to find a way to get future coordinates.
I am using version 1.13.
Oh, sorry, I didn't explain in enough detail.
What I want is to restore a drawing I made in the past.
However, the line tool draws the line vertically, because the chart does not yet have candle data on screen at those coordinates, so the shape has to be drawn in the no-data zone.
Bybit's chart doesn't have this problem. Can you help me?
const widgetOptions = {
  debug: false,
  symbol: this.props.symbol,
  datafeed: Datafeed,
  interval: this.props.interval,
  container_id: this.props.containerId,
  library_path: this.props.libraryPath,
  locale: getLanguageFromURL() || 'en',
  disabled_features: ['use_localstorage_for_settings'],
  enabled_features: ['study_templates'],
  charts_storage_url: this.props.chartsStorageUrl,
  charts_storage_api_version: this.props.chartsStorageApiVersion,
  client_id: this.props.clientId,
  user_id: this.props.userId,
  fullscreen: this.props.fullscreen,
  autosize: this.props.autosize,
  studies_overrides: this.props.studiesOverrides,
  overrides: {
    "mainSeriesProperties.showCountdown": true,
    "paneProperties.background": "#131722",
    "paneProperties.vertGridProperties.color": "#363c4e",
    "paneProperties.horzGridProperties.color": "#363c4e",
    "symbolWatermarkProperties.transparency": 90,
    "scalesProperties.textColor": "#AAA",
    "mainSeriesProperties.candleStyle.wickUpColor": '#336854',
    "mainSeriesProperties.candleStyle.wickDownColor": '#7f323f',
  }
};

Datafeed.onReady(() => {
  const widget = (window.tvWidget = new window.TradingView.widget(widgetOptions));

  widget.onChartReady(() => {
    console.log('Chart has loaded!');
  });
});
I need to solve this part, but I still haven't been able to.
OK, I solved it.
The answer is to use the offset value from the response.
The solution was to determine the interval type from that offset and add the corresponding time span.
Ideally it would be modified to allow drawing in a future space directly, but I'm not sure how to do that.
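For reference, here is a minimal sketch of the idea. The names intervalToMs, lastBarTime and offset are illustrative (not Charting Library API), and the createShape call should be checked against your library version:

// Extrapolate a future timestamp from the newest bar.
// `lastBarTime` (ms) is the time of the most recent bar; `interval` is the
// chart resolution string from the datafeed (e.g. '1', '15', '60', 'D').
function intervalToMs(interval) {
  if (interval === 'D') return 24 * 60 * 60 * 1000; // daily bars
  return parseInt(interval, 10) * 60 * 1000;        // minute-based bars
}

// The time of a point `offset` bars beyond the newest bar:
var futureTime = lastBarTime + offset * intervalToMs(interval);

// That point could then be passed to a drawing call, e.g.:
// tvWidget.chart().createShape({ time: futureTime / 1000, price: somePrice }, { shape: 'vertical_line' });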
I am trying to optimise LCP for this page. I read an article on LCP optimisation that also includes a script which can help determine which part of the LCP most of the time is spent on. Script:
const LCP_SUB_PARTS = [
  'Time to first byte',
  'Resource load delay',
  'Resource load time',
  'Element render delay',
];

new PerformanceObserver((list) => {
  const lcpEntry = list.getEntries().at(-1);
  const navEntry = performance.getEntriesByType('navigation')[0];
  const lcpResEntry = performance
    .getEntriesByType('resource')
    .filter((e) => e.name === lcpEntry.url)[0];

  // Ignore LCP entries that aren't images to reduce DevTools noise.
  // Comment this line out if you want to include text entries.
  if (!lcpEntry.url) return;

  // Compute the start and end times of each LCP sub-part.
  // WARNING! If your LCP resource is loaded cross-origin, make sure to add
  // the `Timing-Allow-Origin` (TAO) header to get the most accurate results.
  const ttfb = navEntry.responseStart;
  const lcpRequestStart = Math.max(
    ttfb,
    // Prefer `requestStart` (if TAO is set), otherwise use `startTime`.
    lcpResEntry ? lcpResEntry.requestStart || lcpResEntry.startTime : 0
  );
  const lcpResponseEnd = Math.max(
    lcpRequestStart,
    lcpResEntry ? lcpResEntry.responseEnd : 0
  );
  const lcpRenderTime = Math.max(
    lcpResponseEnd,
    // Prefer `renderTime` (if TAO is set), otherwise use `loadTime`.
    lcpEntry ? lcpEntry.renderTime || lcpEntry.loadTime : 0
  );

  // Clear previous measures before making new ones.
  // Note: due to a bug this does not work in Chrome DevTools.
  // LCP_SUB_PARTS.forEach(performance.clearMeasures);

  // Create measures for each LCP sub-part for easier
  // visualization in the Chrome DevTools Performance panel.
  const lcpSubPartMeasures = [
    performance.measure(LCP_SUB_PARTS[0], {
      start: 0,
      end: ttfb,
    }),
    performance.measure(LCP_SUB_PARTS[1], {
      start: ttfb,
      end: lcpRequestStart,
    }),
    performance.measure(LCP_SUB_PARTS[2], {
      start: lcpRequestStart,
      end: lcpResponseEnd,
    }),
    performance.measure(LCP_SUB_PARTS[3], {
      start: lcpResponseEnd,
      end: lcpRenderTime,
    }),
  ];

  // Log helpful debug information to the console.
  console.log('LCP value: ', lcpRenderTime);
  console.log('LCP element: ', lcpEntry.element);
  console.table(
    lcpSubPartMeasures.map((measure) => ({
      'LCP sub-part': measure.name,
      'Time (ms)': measure.duration,
      '% of LCP': `${
        Math.round((1000 * measure.duration) / lcpRenderTime) / 10
      }%`,
    }))
  );
}).observe({type: 'largest-contentful-paint', buffered: true});
For me, this was the initial result with 4x CPU slowdown and a Fast 3G connection.
After that, since render delay was the area where I should focus on, I moved some of the scripts to the footer and also made the "deferred" scripts "async". This is the result:
We can see a clear improvement in LCP after the change, but when I test with Lighthouse the result is different.
Before:
After:
I am in a dilemma now about what step to take. Please suggest!
I ran a trace of the URL you linked in your question, and the first thing I noticed is that your LCP resource finishes loading pretty early in the page, but it isn't able to render until a file called mirage2.min.js finishes loading.
This explains why your "Element render delay" portion of LCP is so long; moving your scripts to the bottom of the page or setting defer on them is not going to solve that problem. The solution is to make it so your LCP image can render without having to wait for that JavaScript file to finish loading.
Another thing I noticed is this mirage2.min.js file is loaded from ajax.cloudflare.com, which made me think it's a "feature" offered by Cloudflare and not something you set up yourself.
Based on what I see here, I'm assuming that's true:
https://support.cloudflare.com/hc/en-us/articles/219178057
So my recommendation for you is to turn off this feature, because it's clearly not helping your LCP, as you can see in this trace:
There's one more thing you said that I think is worth clarifying:
After that, since render delay was the area where I should focus on, I moved some of the scripts to the footer and also made the "deferred" scripts "async". This is the result:
When I look at your "result" screenshot, I see that the "element render delay" portion is still > 50%, so while you were correct that "render delay was the area where I should focus on", the fact that it remained high after your changes (moving the scripts and using defer/async) was an indication that those changes didn't fix the problem.
In this case, I believe that if you turn off the "Mirage" feature in your Cloudflare dashboard, you should see a big improvement.
Oh, one more thing, I noticed that you're using importance="high" on your image. This is old syntax that does not work anymore. You should replace that with fetchpriority="high" instead. See this post for details: https://web.dev/priority-hints/
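For example (hero.jpg stands in for your LCP image):

<!-- old syntax, no longer recognised -->
<img src="hero.jpg" importance="high">

<!-- current syntax -->
<img src="hero.jpg" fetchpriority="high">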
Is there a way to use x,y coordinates stored as attributes of each node to layout a graph using the coordinates that were calculated by another program?
And if not, would it be possible to implement this somehow - how does one get started on this?
Is there a standard way how to provide the node positions to the cytoscape.js web viewer somehow?
Background: it is trivial to use Python networkx to calculate some layouts which are not supported in Cytoscape, and it would also be trivial to store the coordinates as node attributes. All that would then be needed is some way for Cytoscape to use those coordinates to find node positions instead of using a layout algorithm.
This is quite an easy thing to do. Many of the demos use this functionality, as you can see here:
1: FCose Demo
2: Cose Blicent Demo
3: d3-force Demo
4: Euler Compound Demo
5: Spread Demo
As you can see, there is an abundance of examples for this in the demos, but also in the docs. You can see one here and here:
// can use reference to eles later
var eles = cy.add([
  { group: 'nodes', data: { id: 'n0' }, position: { x: 100, y: 100 } },
  { group: 'nodes', data: { id: 'n1' }, position: { x: 200, y: 200 } },
  { group: 'edges', data: { id: 'e0', source: 'n0', target: 'n1' } }
]);
The JSON used in the .add() function can be created in your JS application or directly in Python and added to the graph, as some of the examples do.
In general, you should take some time and skim through the docs. The ability to position nodes via x and y is quite basic and is covered in one of the first pages of the docs.
If you don't understand the description in the docs and the examples I provided, please edit your question and add your current code as a Minimal, Reproducible Example, where you can show your efforts to implement the positioning.
Edit:
As @maxkfranz pointed out, the preset layout plays a big role here. The documentation states this in the Initialisation chapter:
If you want to specify your node positions yourself in your elements JSON, you can use the preset layout — by default it does not set any positions, leaving your nodes in their current positions (i.e. specified in options.elements at initialisation time).
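For illustration, here is a minimal sketch of such an initialisation (the container id and elements are placeholders; the positions could just as well come from node attributes exported by networkx):

var cy = cytoscape({
  container: document.getElementById('cy'),
  elements: [
    { group: 'nodes', data: { id: 'n0' }, position: { x: 100, y: 100 } },
    { group: 'nodes', data: { id: 'n1' }, position: { x: 200, y: 200 } },
    { group: 'edges', data: { id: 'e0', source: 'n0', target: 'n1' } }
  ],
  layout: { name: 'preset' } // keep the supplied x/y positions; don't run a layout algorithm
});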
I am new to using Cytoscape, so things that are "easy" are not so easy for me. I often don't even know the right way to ask a question. Everyone has things that are hard and things that are easy, and step by step we expand our knowledge, so what is hard today may be easy tomorrow.
Anyway, here is something that may be a part of the solution you are looking for:
In the Cytoscape desktop application, you can create a "Style" that maps a node attribute to "X Location" and another node attribute to "Y Location".
I am learning about fluid dynamics (and Haxe) and have come across this awesome project and thought I would try to extend it to help me learn. A demo of the original project in action can be seen here.
So far, I have created a side menu of items containing different shapes. When the user clicks one of the shapes and then clicks on the canvas, the selected image should be imprinted onto the dye. The user will then move the mouse and explore the art, etc.
To try and achieve this I did the following:
import js.html.webgl.RenderingContext;

function imageSelection(): Void {
    document.querySelector('.myscrollbar1').addEventListener('click', function() {
        // twilight image clicked
        closeNav();
        reset();
        var image:js.html.ImageElement = cast document.querySelector('img[src="images/twilight.jpg"]');
        gl.current_context.texSubImage2D(cast fluid.dyeRenderTarget.writeToTexture, 0, Math.round(mouse.x), Math.round(mouse.y), RenderingContext.RGB, RenderingContext.UNSIGNED_BYTE, image);
        TWILIGHT = true;
    });
}
After this call, inside the update function, I have the following:
override function update( dt:Float ){
    time = haxe.Timer.stamp() - initTime;
    performanceMonitor.recordFrameTime(dt);

    // Smaller number creates a bigger ripple, was 0.016
    dt = 0.090; //#!

    // Physics
    // interaction
    updateDyeShader.isMouseDown.set(isMouseDown && lastMousePointKnown);
    mouseForceShader.isMouseDown.set(isMouseDown && lastMousePointKnown);

    // step physics
    fluid.step(dt);
    particles.flowVelocityField = fluid.velocityRenderTarget.readFromTexture;

    if(renderParticlesEnabled){
        particles.step(dt);
    }

    // Below handles the cycling of colours once the mouse is moved,
    // after which the image should be disrupted into the set dye colours.
}
However, although the project builds, I can't seem to get the image imprinted onto the canvas. I have checked the console log and I can see the following error:
WebGL: INVALID_ENUM: texSubImage2D: invalid texture target
Is it safe to assume that my cast for the first param is not allowed?
I have read that the texture target is the first parameter, and INVALID_ENUM in particular means that one of the gl.XXX parameters is just flat-out wrong for that particular function.
Looking through the file, writeToTexture is declared as: public var writeToTexture (default, null):GLTexture;. It is a wrapper around a regular WebGL handle.
I am using Haxe version 3.2.1 and Snow to build the project. writeToTexture is defined inside HaxeToolkit\haxe\lib\gltoolbox\git\gltoolbox\render.
writeToTexture in gltoolbox is a GLTexture. With snow and snow_web, this is defined in snow.modules.opengl.GL as:
typedef GLTexture = js.html.webgl.Texture;
So we're simply dealing with a js.html.webgl.Texture here, or WebGLTexture in native JS.
Which means that yes, this is definitely not a valid value for texSubImage2D()'s target, which is specified to take one of the gl.TEXTURE_* constants.
A GLenum specifying the binding point (target) of the active texture.
From this description it's obvious that the parameter isn't actually for the texture itself - it merely gives some info on how the active texture should be used.
The question then becomes how the "active" texture can be set. bindTexture() can be used for this.
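In plain JS WebGL terms, a minimal sketch looks like this (gl, texture, x, y and image stand in for the values in your Haxe code):

// Bind the texture to the TEXTURE_2D binding point first...
gl.bindTexture(gl.TEXTURE_2D, texture);
// ...then update it; the first argument is the binding-point enum,
// not the texture object itself.
gl.texSubImage2D(gl.TEXTURE_2D, 0, x, y, gl.RGB, gl.UNSIGNED_BYTE, image);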
I have a path moving over time.
I use Cesium.sampleTerrain to get the positions' elevations and drape them on the terrain.
The problem is that, even if all points are on the terrain, the line connecting 2 points sometimes goes under the terrain.
How can I drape the connecting lines on the terrain as well?
Here is my code:
var promise = Cesium.sampleTerrain(terrainProvider, 14, positions);
Cesium.when(promise, function(updatedPositions) {
  var cartesianPositions = Cesium.Ellipsoid.WGS84.cartographicArrayToCartesianArray(updatedPositions);
  var sample = new Cesium.SampledPositionProperty();
  sample.setInterpolationOptions({
    interpolationDegree: 3,
    interpolationAlgorithm: Cesium.HermitePolynomialApproximation
  });

  $(cartesianPositions).each(function(index, cartPosition) {
    var time = Cesium.JulianDate.addSeconds(start, index * 10, new Cesium.JulianDate());
    sample.addSample(time, cartPosition);
  });

  var target = viewer.entities.add({
    position: sample,
    path: {
      resolution: 60,
      material: Cesium.Color.BLUE,
      width: 4,
      trailTime: 422 * 10,
      leadTime: 0
    }
  });
});
So, as Matthew says, Cesium doesn't currently support a polyline-type entity draped over terrain.
If you find that the Entity API isn't giving you what you need, it might be worth digging into the lower-level Primitives API to gain finer control - more specifically the GroundPrimitive geometry.
Among others, GroundPrimitive currently supports CorridorGeometry.
I have no experience with temporal data plotting within Cesium, but I would suggest you consider this approach rather than the async promise approach, which (IMO) seems like more of a hack born from the absence of a GroundPrimitive-type solution at the time.
Here's a crude example of a GroundPrimitive in action (note we don't need any z values):
var viewer = new Cesium.Viewer('cesiumContainer');

var corridorInstance = new Cesium.GeometryInstance({
  geometry: new Cesium.CorridorGeometry({
    vertexFormat: Cesium.VertexFormat.POSITION_ONLY,
    positions: Cesium.Cartesian3.fromDegreesArray([
      -122.26, 46.15,
      -122.12, 46.26,
    ]),
    width: 100
  }),
  id: 'myCorridor',
  attributes: {
    color: new Cesium.ColorGeometryInstanceAttribute(0.0, 1.0, 1.0, 0.5)
  }
});

var corridorPrimitive = new Cesium.GroundPrimitive({
  geometryInstances: corridorInstance
});

viewer.scene.primitives.add(corridorPrimitive);

viewer.camera.setView({
  destination: Cesium.Cartesian3.fromDegrees(-122.19, 46.20, 10000.0)
});
Which will give you this:
Cesium does not currently support draping lines on terrain, but it is on our road map and really important to us. This is actually an extremely complicated problem to handle correctly in all cases (and is even more complicated because of the limitations of WebGL). It will require a lot of research and experimentation and there's no hard timeline for when it will be finished. We should have a version of it for static lines by spring as part of our 3D Tiles work, but dynamic lines are probably further out.
If you're interested in following development of this feature, keep your eye on issue #2172 in our GitHub repository. We'll also make announcements on our blog/twitter/forum when this feature is part of an official release.
Using jQuery Flot, I can pass a null value to the plotting mechanism so it just won't draw anything on the plot. See how the missing records are suppressed:
I'm looking to move to d3js, so that I can have deeper low level control of the graphics using SVG. However, I have yet to find out how to do that same process of suppressing missing records. The image below is an attempt to do this, using a value of 0 instead of null (where the d3 package breaks down). Here is some code to give you an idea of how I produced the graph below:
var line = d3.svg.line()
  .x(function(d) {
    var date = new Date(d[0]);
    return x(date);
  })
  .y(function(d) {
    var height = d[1];
    if (no_record_exists) {
      return y(0);
    }
    return y(height) + 0.5;
  });
I looked up the SVG path element at the Mozilla Developer Network, and I found out that there is a MoveTo command, M x y, that only moves the "pen" to some point without drawing anything. Has this been implemented in the d3js package, so that I won't have to create several path elements every time I encounter a missing record?
The defined function of d3.svg.line() is the way to do this.
Let's say we want to include a break in the chart if y is null:
line.defined(function(d) { return d.y != null; })
Use line.defined or area.defined, and see the Area with Missing Data example.
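For example, a minimal sketch matching the d3.svg.line() code above (data and the x/y scales are assumed to exist; records whose value is null produce a gap in the path instead of a point at zero):

var line = d3.svg.line()
  .defined(function(d) { return d[1] != null; }) // skip missing records
  .x(function(d) { return x(new Date(d[0])); })
  .y(function(d) { return y(d[1]); });

// e.g. [[t0, 10], [t1, null], [t2, 12]] renders two segments with a gap
svg.append('path').datum(data).attr('d', line);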