Effects framework "ConstructGSWithSO" to plain HLSL and C++ - directx

How do I convert this technique (from the NVIDIA InstancedTessellation sample) to plain HLSL + DirectX 11 C++ code?
float4 PreprocessedLoDVS( uint id : SV_InstanceID, uniform int method) : LODS
{
    float4 tessLevel;
    if (method == 1) //Gregory
    {
        float3 positionControlPoints[20];
        //  8    9    10   11
        // 12   0\1  2/3   13
        // 14   4/5  6\7   15
        // 16   17   18    19
        LoadGregoryPositionControlPoints(id, positionControlPoints);
        tessLevel.x = evaluateEdgeLoD(positionControlPoints[16], positionControlPoints[14], positionControlPoints[12], positionControlPoints[ 8]);
        tessLevel.y = evaluateEdgeLoD(positionControlPoints[19], positionControlPoints[15], positionControlPoints[13], positionControlPoints[11]);
        tessLevel.z = evaluateEdgeLoD(positionControlPoints[16], positionControlPoints[17], positionControlPoints[18], positionControlPoints[19]);
        tessLevel.w = evaluateEdgeLoD(positionControlPoints[ 8], positionControlPoints[ 9], positionControlPoints[10], positionControlPoints[11]);
    }
    else if (method == 0) //Regular
    {
        float3 positionControlPoints[16];
        //  0   1   2   3
        //  4   5   6   7
        //  8   9  10  11
        // 12  13  14  15
        LoadRegularControlPoints(id, positionControlPoints);
        tessLevel.x = evaluateEdgeLoD(positionControlPoints[ 0], positionControlPoints[ 4], positionControlPoints[ 8], positionControlPoints[12]);
        tessLevel.y = evaluateEdgeLoD(positionControlPoints[ 3], positionControlPoints[ 7], positionControlPoints[11], positionControlPoints[15]);
        tessLevel.w = evaluateEdgeLoD(positionControlPoints[ 0], positionControlPoints[ 1], positionControlPoints[ 2], positionControlPoints[ 3]);
        tessLevel.z = evaluateEdgeLoD(positionControlPoints[12], positionControlPoints[13], positionControlPoints[14], positionControlPoints[15]);
    }
    else if (method == 2) //Bezier
    {
        float3 positionControlPoints[16];
        //  0   1   2   3
        //  4   5   6   7
        //  8   9  10  11
        // 12  13  14  15
        LoadBezierPositionControlPoints(id, positionControlPoints);
        tessLevel.x = evaluateEdgeLoD(positionControlPoints[ 0], positionControlPoints[ 4], positionControlPoints[ 8], positionControlPoints[12]);
        tessLevel.y = evaluateEdgeLoD(positionControlPoints[ 3], positionControlPoints[ 7], positionControlPoints[11], positionControlPoints[15]);
        tessLevel.z = evaluateEdgeLoD(positionControlPoints[12], positionControlPoints[13], positionControlPoints[14], positionControlPoints[15]);
        tessLevel.w = evaluateEdgeLoD(positionControlPoints[ 0], positionControlPoints[ 1], positionControlPoints[ 2], positionControlPoints[ 3]);
    }
    else if (method == 3) //Pm
    {
        // 18  14  13  12
        // 19           8
        // 20           7
        //  0   1   2   6
        float3 positionControlPoints[24];
        LoadPmControlPoints(id, positionControlPoints);
        tessLevel.x = evaluateEdgeLoD(positionControlPoints[ 0], positionControlPoints[20], positionControlPoints[19], positionControlPoints[18]);
        tessLevel.y = evaluateEdgeLoD(positionControlPoints[ 6], positionControlPoints[ 7], positionControlPoints[ 8], positionControlPoints[12]);
        tessLevel.z = evaluateEdgeLoD(positionControlPoints[18], positionControlPoints[14], positionControlPoints[13], positionControlPoints[12]);
        tessLevel.w = evaluateEdgeLoD(positionControlPoints[ 0], positionControlPoints[ 1], positionControlPoints[ 2], positionControlPoints[ 6]);
    }
    else
    {
        tessLevel = float4(2, 2, 2, 2);
    }
    return tessLevel;
}

technique10 LoDRegularTechnique
{
    pass P0
    {
        SetDepthStencilState( DisableDepthWrites, 0 );
        SetVertexShader( CompileShader( vs_4_0, PreprocessedLoDVS(0) ) );
        SetGeometryShader( ConstructGSWithSO( CompileShader( vs_4_0, PreprocessedLoDVS(0) ), "LODS.xyzw" ) );
        SetPixelShader( NULL );
    }
}
PreprocessedLoDVS looks like a usual vertex shader, except for the "LODS" output semantic — but what about the geometry shader?

This link explains it pretty nicely, but basically, to summarize:
Compile your shader to a blob, using PreprocessedLoDVS as the entry point and vs_4_0 as the profile.
Create a stream-output layout, which is an array of D3D11_SO_DECLARATION_ENTRY; here that is the LODS semantic with 4 components.
Create a geometry shader from that vertex-shader bytecode using CreateGeometryShaderWithStreamOutput (despite the name of the effect intrinsic, this is the geometry-shader stage doing the stream output).
Create a buffer with the D3D11_BIND_STREAM_OUTPUT flag (same as you would create any other buffer; for that sample you will also need the shader-resource flag, since you will need an SRV to bind it back as a Buffer input).
Bind this buffer to the stream-output stage (using SOSetTargets).
Set your vertex shader and the stream-output geometry shader on the pipeline and issue your draw call.

Related

Find the sum of the difference of maximum and minimum element of all subarrays

sample input: 3 1 4 2
output: 1) Subarrays of size 1: (3), (1), (4), (2), sum = 0 + 0 + 0 + 0 = 0.
2) Subarrays of size 2: [3, 1], [1, 4], [4, 2], sum = 2 + 3 + 2 = 7.
3) Subarrays of size 3: [3, 1, 4], [1, 4, 2], sum = 3 + 3 = 6.
4) Subarrays of size 4: [3, 1, 4, 2], sum = 3.
Total sum = 16.
This problem can be solved in many ways; hope this helps you get some ideas.
import numpy as np

arr = [3, 1, 4, 2]
res = 0
for i in range(len(arr) + 1):
    for j in range(i):
        if len(arr[j:i]) > 1:  # only subarrays with at least 2 elements
            res += np.max(arr[j:i]) - np.min(arr[j:i])
print(res)  # 16
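The nested loops above are roughly O(n³) because every slice is rescanned for its max and min. For larger inputs, the standard monotonic-stack technique computes the same total in O(n): sum each element's contribution as a subarray maximum, subtract its contribution as a subarray minimum. A sketch (the function name sum_of_ranges is mine):

```python
def sum_of_ranges(arr):
    """Sum of (max - min) over all subarrays, via monotonic stacks."""
    n = len(arr)

    def extremes_sum(is_max):
        # arr[j] is the extreme of (j - left) * (i - j) subarrays, where
        # left and i bound the region it dominates; pop when dominated.
        total = 0
        stack = []  # indices with monotonically ordered values
        for i in range(n + 1):
            while stack and (i == n or
                             (arr[stack[-1]] < arr[i] if is_max
                              else arr[stack[-1]] > arr[i])):
                j = stack.pop()
                left = stack[-1] if stack else -1
                total += arr[j] * (j - left) * (i - j)
            stack.append(i)
        return total

    return extremes_sum(True) - extremes_sum(False)

print(sum_of_ranges([3, 1, 4, 2]))  # 16
```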

Calculate sum of every 5 elements in array of Integer in Swift iOS

In Swift 3, how can we calculate sum of every 5 elements in array of Int.
For example, we have an array [1,2,3,4,5,6,7,8,9,0,12,23]
1+2+3+4+5 = 15
6+7+8+9+0 = 30
12+23+0+0+0 = 35
The result something like this [15,30,35]
Here is my solution in a playground:
//: Playground - noun: a place where people can play
import UIKit

var arr = [1,1,1,1,1,2,2,2,2,2,3,3,3,3,3]
let chunkSize = 5
let chunks = stride(from: 0, to: arr.count, by: chunkSize).map {
    Array(arr[$0..<min($0 + chunkSize, arr.count)])
}
print(chunks)
let summ = chunks.map { $0.reduce(0, +) }
print(summ)
OUTPUT:
[[1, 1, 1, 1, 1], [2, 2, 2, 2, 2], [3, 3, 3, 3, 3]]
[5, 10, 15]
Take a look at:
Finding sum of elements in Swift array
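For comparison only (not a Swift answer), the same stride/chunk-then-sum pattern in Python, using the question's sample data:

```python
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 12, 23]
chunk_size = 5
# take slices starting at 0, 5, 10, ... and sum each one;
# the final short slice just sums whatever elements remain
sums = [sum(arr[i:i + chunk_size]) for i in range(0, len(arr), chunk_size)]
print(sums)  # [15, 30, 35]
```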

How to optimize the solution for Two_sum code in ruby

I am working on a solution to the following question.
Given an array of integers, return indices of the two numbers such that they add up to a specific target.
You may assume that each input would have exactly one solution.
Example:
Given nums = [2, 7, 11, 15], target = 9,
Because nums[0] + nums[1] = 2 + 7 = 9,
return [0, 1].
This is the solution submitted in ruby after referring the C++ code http://leetcodeunlock.com/2016/05/20/leetcode-1-two-sum-easy/ .
def two_sum(nums, target)
  hash = {}
  arr = []
  nums.each_with_index do |value, index|
    y = target - value
    if hash.find { |key, val| key == value }
      arr << hash[value]
      arr << index
      return arr
    else
      hash[y] = index
    end
  end
end
My submission failed with the message : Time limit exceeded. Can anyone point out the mistake and help me optimise the code?
nums = [2, 7, 11, 15]
target = 9
# this will find all combinations of 2 elements that add up to 9
results = (0...nums.size).to_a.combination(2).select { |first, last| nums[first] + nums[last] == target }
results.first #=> [0, 1]
Explanation of some parts of the code:
# Get indexes of all elements of nums array
(0...nums.size).to_a #=> [0, 1, 2, 3]
# Generate all combinations of indexes of each 2 elements
(0...nums.size).to_a.combination(2).to_a #=> [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]
I have modified the line
if(hash.find{|key,val| key == value})
to
if(hash.key?(value))
to find if a specific key is present in the hash and this solved the issue.
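The reason this fixes the timeout: Hash#key? is an O(1) hash lookup, while hash.find scans every stored entry, making the loop O(n²) overall. The same one-pass idea, sketched in Python for reference:

```python
def two_sum(nums, target):
    # seen maps "value still needed" -> index of the element waiting for it
    seen = {}
    for index, value in enumerate(nums):
        if value in seen:          # O(1) hash lookup, not a linear scan
            return [seen[value], index]
        seen[target - value] = index

print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
```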
Code
def sum_to_num(arr, num)
  return [num/2, num/2] if num.even? && arr.count(num/2) > 1
  a = arr.uniq.
        group_by { |n| (2*n-num).abs }.
        find { |_,a| a.size > 1 }
  a.nil? ? nil : a.last
end
This method requires three or four passes through the array: if num is even, one to count the instances of num/2; then one to remove duplicate values, one to group_by, and one to find the pair of numbers that sum to the desired total. It therefore should be much faster than methods that evaluate every pair of the array's elements, particularly as the size of the array increases.
Examples
sum_to_num [2, 11, 7, 15], 9
#=> [2, 7]
sum_to_num [2, 5, 2, 6, 1, -5, 4], 10
#=> [6, 4]
sum_to_num [2, 7, 11, -7, 15], 0
#=> [7, -7]
sum_to_num [2, 7, 11, 7, 15], 14 #???
sum_to_num [2, -7, 11, -7, 15], -14 #???
sum_to_num [2, 7, 11, 15], 17
#=> [2, 15]
sum_to_num [2, -11, 8, 15], 4
#=> [-11, 15]
sum_to_num [2, -11, 8, 15], -3
#=> [-11, 8]
sum_to_num [2, -11, 8, 15], 100
#=> nil
Explanation
Assume x and y sum to num. Then
2*x-num + 2*y-num = 2*(x+y) - 2*num
= 2*num - 2*num
= 0
meaning that 2*x-num and 2*y-num are either both zero or have opposite signs and the same absolute value. Similarly, if 2*x-num and 2*y-num sum to zero, then
2*x-num + 2*y-num = 0
2*(x+y) - 2*num = 0
meaning that x+y = num (which is hardly surprising, considering that x -> 2*x-num is a linear transformation).
Suppose
arr = [2, 5, 2, 6, 1, -5, 4]
num = 10
then
if num.even? && arr.count(num/2) > 1
#=> if 10.even? && arr.count(5) > 1
#=> if true && false
#=> false
Therefore, do not return [5,5].
b = arr.uniq
#=> [2, 5, 6, 1, -5, 4]
c = b.group_by { |n| (2*n-num).abs }
#=> {6=>[2], 0=>[5], 2=>[6, 4], 8=>[1], 20=>[-5]}
a = c.find { |_,a| a.size > 1 }
#=> [2, [6, 4]]
return nil if a.nil?
# do not return
a.last
#=> [6, 4]
I was doing this challenge for fun and wrote a cleaned-up Ruby solution.
def two_sum(nums, target)
  hash = {}
  nums.each_with_index { |number, index| hash[number] = index }
  nums.each_with_index do |number, index|
    difference = target - number
    if hash[difference] && hash[difference] != index
      return [index, hash[difference]]
    end
  end
end
# @param {Integer[]} nums
# @param {Integer} target
# @return {Integer[]}
def two_sum(nums, target)
  length = nums.length
  for i in 0..length
    j = i + 1
    for a in j..length
      if j < length
        if nums[i] + nums[a] == target
          return [i, a]
        end
      end
      j += 1
    end
  end
  []
end
Well, this is my way of solving it:
def two_sum(nums, target)
  nums.each_with_index do |value, index|
    # search only after index, so an element cannot be matched with itself
    rest_index = nums[index + 1..-1].find_index(target - value)
    return [index, index + 1 + rest_index] if rest_index
  end
  nil
end
The above has the advantage that it stops execution when a match is found and so hopefully won't time out. :)

How to sort map's values?

Can someone give me a hint? I want to sort a map's values by the length of the lists.
var chordtypes = {
  "maj": [0, 4, 7],
  "M7": [0, 4, 7, 11],
  "m7": [0, 3, 7, 10],
  "6": [0, 4, 7, 9],
  "9": [0, 4, 7, 10, 14],
  "sus2": [0, 2, 7],
  "sus4": [0, 5, 7],
  "omit3": [0, 7],
  "#5": [0, 4, 8],
  "+7b9#11": [0, 4, 8, 10, 13, 18],
  "+9": [0, 4, 8, 10, 14]
};
A function that sorts a Map of Lists by the length of the lists:
import 'dart:collection';

/// Sorts the ListMap (== a Map of List<V>) on the length
/// of the List values.
LinkedHashMap sortListMap(LinkedHashMap map) {
  List mapKeys = map.keys.toList(growable: false);
  mapKeys.sort((k1, k2) => map[k1].length - map[k2].length);
  LinkedHashMap resMap = new LinkedHashMap();
  mapKeys.forEach((k1) { resMap[k1] = map[k1]; });
  return resMap;
}
The result for:
var res = sortListMap(chordtypes);
print(res);
==>
{ omit3: [0, 7],
maj: [0, 4, 7],
sus2: [0, 2, 7],
sus4: [0, 5, 7],
#5: [0, 4, 8],
M7: [0, 4, 7, 11],
m7: [0, 3, 7, 10],
6: [0, 4, 7, 9],
9: [0, 4, 7, 10, 14],
+9: [0, 4, 8, 10, 14],
+7b9#11: [0, 4, 8, 10, 13, 18] }
Using the Dart language:
Let's say you want to sort a Map with an integer key and a value of type Foo:
class Foo {
  int x; // it can be any type
}
You can get the list of all the entries, sort them like a normal list, and then rebuild the map:
Map<int, Foo> map = ... // fill map
var entries = map.entries.toList();
entries.sort((MapEntry<int, Foo> a, MapEntry<int, Foo> b) => a.value.x.compareTo(b.value.x));
map = Map<int, Foo>.fromEntries(entries);
Something like this could work for you:
Map chordtypes = {
  "maj": [0, 4, 7],
  "M7": [0, 4, 7, 11],
  "m7": [0, 3, 7, 10],
  "6": [0, 4, 7, 9],
  "9": [0, 4, 7, 10, 14],
  "sus2": [0, 2, 7],
  "sus4": [0, 5, 7],
  "omit3": [0, 7],
  "#5": [0, 4, 8],
  "+7b9#11": [0, 4, 8, 10, 13, 18],
  "+9": [0, 4, 8, 10, 14]
};

List keys = chordtypes.keys.toList();
keys.sort((k1, k2) {
  if (chordtypes[k1].length > chordtypes[k2].length)
    return -1;
  if (chordtypes[k1].length < chordtypes[k2].length)
    return 1;
  return 0;
});
keys.forEach((String k) {
  print('$k ${chordtypes[k]}');
});
Building on @Leonardo Rignanese's answer, an extension function for a more functional approach:
extension MapExt<T, U> on Map<T, U> {
  Map<T, U> sortedBy(Comparable value(U u)) {
    final entries = this.entries.toList();
    entries.sort((a, b) => value(a.value).compareTo(value(b.value)));
    return Map<T, U>.fromEntries(entries);
  }
}
General usage:
foos.sortedBy((it) => it.bar);
Usage for OP:
final sortedChordtypes = chordtypes.sortedBy((it) => it.length);
Gist here: https://gist.github.com/nmwilk/68ae0424e848b9f05a8239db6b708390
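The sort-the-entries-then-rebuild pattern above is not Dart-specific. For comparison only, the same idea in Python, where dicts preserve insertion order (since 3.7):

```python
chordtypes = {
    "maj": [0, 4, 7],
    "M7": [0, 4, 7, 11],
    "9": [0, 4, 7, 10, 14],
    "omit3": [0, 7],
}
# sort the (key, value) pairs by list length, then rebuild the dict
sorted_chords = dict(sorted(chordtypes.items(), key=lambda kv: len(kv[1])))
print(list(sorted_chords))  # ['omit3', 'maj', 'M7', '9']
```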

jquery flot, graph point not place correct

My graph gets its data from the server; the server time is multiplied by 1000 to convert it to a JavaScript timestamp. Here is my graph:
http://awesomescreenshot.com/03f1ft5k72
The points do not line up with the day columns. I have to add 25 000 000 to each value to make them match the day columns. Why does this happen? I'm new to Flot; can you share your experience?
var d_register = #{ @total.map{ |t| [t.date.to_time.to_i * 1000, t.register] } };
// must add 25 000 000 to it for the correct point position:
// var d_register = #{ @total.map{ |t| [t.date.to_time.to_i * 1000 + 25000000, t.register] } };
var curr = new Date(#{Date.current.to_time.to_i * 1000}); // get current date
define_flot();

function define_flot() {
  var first = new Date(curr.getTime() - curr.getDay() * 24 * 60 * 60 * 1000);
  var last = new Date(first.getTime() + 7 * 24 * 60 * 60 * 1000);
  $(".week-time").text(
    (first.getMonth() + 1) + '/' +
    first.getDate() + '/' +
    first.getFullYear() + ' - ' +
    (last.getMonth() + 1) + '/' +
    last.getDate() + '/' +
    last.getFullYear());
  var first_day_in_week = first.getTime();
  var last_day_in_week = last.getTime();
  $.plot("#placeholder", [d_register], {
    yaxis: {
      tickDecimals: 0
    },
    xaxis: {
      mode: "time",
      minTickSize: [1, "day"],
      min: first_day_in_week,
      max: last_day_in_week,
      timeformat: "%a"
    }
  });
}
data_register :
[[1370710800000, 1], [1370797200000, 7], [1370883600000, 1], [1371056400000, 0], [1371142800000, 0], [1371747600000, 0], [1371834000000, 0], [1371920400000, 0], [1372006800000, 0], [1372093200000, 0], [1372179600000, 0]]
data_register after + 25000000:
[[1370735800000, 1], [1370822200000, 7], [1370908600000, 1], [1371081400000, 0], [1371167800000, 0], [1371772600000, 0], [1371859000000, 0], [1371945400000, 0], [1372031800000, 0], [1372118200000, 0], [1372204600000, 0]]
Your screenshot link appears to have broken, but as I recall the problem was that the values were supposed to be aligned to the beginning of a day, but were showing offset by a few hours.
The problem is that Flot expects UTC timestamps, while yours are offset. 1370710800000, for example, is actually June 8, 2013 at 17:00 GMT. By adding 25M - 1370735800000 - that becomes 23:56:40, which is close enough that it appears to line up to the day boundary. Really what you want to add is 7 (hours) * 3600 (seconds) * 1000 (milliseconds) = 25200000, which compensates for what I'm assuming is your local timezone offset of GMT-7, US Pacific Time.
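A quick Python check of that arithmetic, using the first timestamp from the question's data:

```python
from datetime import datetime, timezone

ts = 1370710800000  # first timestamp from the question's data, in ms
print(datetime.fromtimestamp(ts / 1000, tz=timezone.utc))
# -> 2013-06-08 17:00:00+00:00, i.e. not on a UTC day boundary

offset_ms = 7 * 3600 * 1000  # GMT-7 in milliseconds = 25200000, not 25000000
print(datetime.fromtimestamp((ts + offset_ms) / 1000, tz=timezone.utc))
# -> 2013-06-09 00:00:00+00:00, exactly midnight, so the point lands on a day
```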