I am trying to work with matrices; I have a model with an attribute called "board", which is just a 4x4 matrix. I display this board in my view. So far so good. When I click a button, I send the param "board" with, for example, this structure:
{"utf8"=>"✓", "game_master"=>{"board"=>"Matrix[[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]"}, "commit"=>"Yolo"}
On the other side, in the controller, I try to recreate this board by creating a new GameMaster with board = Matrix[[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]. So far so good (NOT: I know that params[:board] is just a string; that's my problem). Then, later on, when trying to iterate over the matrix, I get this error:
undefined method `each_with_index' for "Matrix[[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]":String
Clearly, I bound :board to a string, NOT a matrix. How would I go about converting that string into the corresponding matrix?
Thanks
UPDATE:
game_masters_controller.rb
def step
  @game_master = GameMaster.new(game_master_params)
  @game_master.step
  respond_to do |format|
    format.js
  end
end
And:
private

def game_master_params
  params.require(:game_master).permit(:board)
end
game_master.rb
def initialize(attributes = {})
  attributes.each do |name, value|
    send("#{name}=", value)
  end
  if self.board.nil?
    self.board = get_new_board
  end
end
Simply do:
arr = params[:game_master][:board].scan(/\d+/).map(&:to_i).each_slice(4).to_a
# => [[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]
require 'matrix'
matrix = Matrix[*arr]
# => Matrix[[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]
(scan(/\d+/) extracts just the digits; a plain split(',') with to_i would turn tokens like " [1" into 0 and corrupt the first entry of some rows.)
Quick and dirty and not very secure:
class GameMaster
  ...
  def board=(attr)
    @board = eval attr
  end
end
I would not run eval on something that gets submitted via a form. If the matrix is always 4x4, I would probably just submit the values as one long comma-separated string like 0,0,0,1,1,1,0 .... Then I would use String#split to turn the large string into an array. Once you have one big array, you can slice it into an array of arrays that you can pass to Matrix.rows:
require 'matrix'

string_params = "0,1,1,0,0,1"
array_of_strings = string_params.split(',')
array_of_arrays = array_of_strings.map(&:to_i).each_slice(4).to_a
matrix = Matrix.rows(array_of_arrays)
That should point you in the right direction.
Good luck!
Try this code:
(As the other answers mentioned, it's not secure to eval code coming from user input.)
require 'matrix'
m = eval "Matrix[[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]"
=> Matrix[[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]
m.transpose
=> Matrix[[0, 0, 0, 1], [0, 0, 0, 1], [0, 1, 1, 0], [0, 1, 0, 0]]
Requiring the matrix library gives you access to a lot of useful methods; check the documentation for further information:
http://ruby-doc.org/stdlib-2.1.0/libdoc/matrix/rdoc/Matrix.html
I am struggling with the following problem:
I have a dataset of different products (cars) that have certain work orders open at a given time. I know from historical data how much time this work has taken in TOTAL.
Now I want to predict it for another car (e.g. Car 3).
Which type of algorithm or regression should I use for this?
My idea was to transform this row-based dataset into a column-based one with binary values, e.g. Brake: 0/1, Screen: 0/1. But then I will have lots of inputs, as the number of possible inputs is 100-200.
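For illustration, that binary transform itself is easy with pandas (a minimal sketch; the column names and values here are made up, not from my actual data):
import pandas as pd

# One row per (car, open work order) pair; names are hypothetical.
df = pd.DataFrame({
    "car":        ["Car 1", "Car 1", "Car 2", "Car 2"],
    "work_order": ["Brake", "Screen", "Brake", "Wiper"],
})

# Pivot to one row per car, with a 0/1 column per possible work order.
binary = pd.get_dummies(df["work_order"]).groupby(df["car"]).max().astype(int)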
Here's a quick idea using multi-factor regression for 30 jobs, each of which is some random accumulation of 6 tasks with a "true cost" for each task. We can regress against the task selections in each job to estimate the cost coefficients that best explain the total job costs.
First it's done with no "noise" in the system (task costs are exact), then with some random noise added.
A "more thorough" job would include examining the R-squared value and plotting the residuals to ensure linearity.
In [1]: from sklearn import linear_model
In [2]: import numpy as np
In [3]: jobs = np.random.binomial(1, 0.6, (30, 6))
In [4]: true_costs = np.array([10, 20, 5, 53, 31, 42])
In [5]: jobs
Out[5]:
array([[0, 1, 1, 1, 1, 0],
[1, 0, 0, 1, 0, 1],
[1, 1, 0, 1, 0, 0],
[1, 0, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1],
[0, 1, 0, 0, 1, 0],
[1, 0, 0, 1, 1, 0],
[1, 1, 1, 1, 0, 1],
[1, 0, 0, 1, 0, 1],
[0, 1, 0, 1, 0, 0],
[0, 0, 1, 0, 1, 1],
[1, 0, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 1],
[1, 0, 1, 1, 1, 1],
[0, 1, 1, 0, 1, 0],
[1, 0, 1, 0, 1, 0],
[1, 1, 1, 1, 1, 1],
[1, 0, 1, 0, 0, 1],
[0, 1, 0, 1, 1, 0],
[1, 1, 1, 0, 1, 0],
[1, 1, 1, 1, 1, 0],
[1, 0, 1, 0, 0, 1],
[0, 0, 0, 1, 1, 1],
[1, 1, 0, 1, 1, 1],
[1, 0, 1, 1, 0, 1],
[1, 1, 1, 1, 1, 1],
[1, 0, 1, 1, 1, 1],
[0, 0, 1, 1, 0, 0],
[1, 1, 0, 0, 1, 1],
[1, 1, 1, 1, 0, 0]])
In [6]: tot_job_costs = jobs @ true_costs
In [7]: reg = linear_model.LinearRegression()
In [8]: reg.fit(jobs, tot_job_costs)
Out[8]: LinearRegression()
In [9]: reg.coef_
Out[9]: array([10., 20., 5., 53., 31., 42.])
In [10]: np.random.normal?
In [11]: noise = np.random.normal(0, scale=5, size=30)
In [12]: noisy_costs = tot_job_costs + noise
In [13]: noisy_costs
Out[13]:
array([113.94632664, 103.82109478, 78.73776288, 145.12778089,
104.92931235, 48.14676751, 94.1052639 , 134.64827785,
109.58893129, 67.48897806, 75.70934522, 143.46588308,
143.12160502, 147.71249157, 53.93020167, 44.22848841,
159.64772255, 52.49447057, 102.70555991, 69.08774251,
125.10685342, 45.79436364, 129.81354375, 160.92510393,
108.59837665, 149.1673096 , 135.12600871, 60.55375843,
107.7925208 , 88.16833899])
In [14]: reg.fit(jobs, noisy_costs)
Out[14]: LinearRegression()
In [15]: reg.coef_
Out[15]:
array([12.09045186, 19.0013987 , 3.44981506, 55.21114084, 33.82282467,
40.48642199])
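To follow up on the R-squared/residuals point, the session could continue like this (an illustrative sketch, not actually run):
r2 = reg.score(jobs, noisy_costs)           # coefficient of determination
residuals = noisy_costs - reg.predict(jobs)
# Plot residuals against predictions to eyeball linearity:
# import matplotlib.pyplot as plt
# plt.scatter(reg.predict(jobs), residuals); plt.axhline(0); plt.show()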
The idea is simple, but the execution is bothering me.
I've created a small random dungeon generator that creates a grid like this:
000001
000111
000111
001101
011101
011111
This is a sample 6x6 dungeon where 0 is a wall and 1 is an open path.
The conversion from this to some sort of tile-id map is simple and trivial, but creating the image itself is the hard part.
I want to know if there's a lib or method to achieve that. If not, then what would you do?
This is not part of a game; it's only a dungeon generator for D&D. Any language is OK, but the generator was made in Go.
You can use OpenCV for this task. PIL can probably do the same, but I don't have experience with it.
import cv2
import numpy as np

data_list = [
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 0, 1],
    [0, 1, 1, 1, 0, 1],
    [0, 1, 1, 1, 1, 1]
]

# 0 -> black (wall), 1 -> white (open path)
arr = np.array(data_list, dtype=np.uint8) * 255
# scale each cell up to a 50x50 block; INTER_NEAREST keeps the edges crisp
arr = cv2.resize(arr, (0, 0), fx=50, fy=50, interpolation=cv2.INTER_NEAREST)

cv2.imshow("img", arr)
cv2.waitKey()
# or you can save it to disk
cv2.imwrite("img.png", arr)
Use np.block():
# a bunch of sprites/images, all the same size, as NumPy arrays
# load them however you like
tiles = [...]

data_list = [
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 0, 1],
    [0, 1, 1, 1, 0, 1],
    [0, 1, 1, 1, 1, 1]
]

# stitch one tile per grid cell into the full picture
picture = np.block([
    [tiles[k] for k in row]
    for row in data_list
])
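For a self-contained test, the tiles can be plain arrays; the two tiles below are made up for illustration:
import numpy as np

wall = np.zeros((16, 16), dtype=np.uint8)      # tile 0: solid black
path = np.full((16, 16), 255, dtype=np.uint8)  # tile 1: solid white
tiles = [wall, path]
# with the 6x6 grid above, np.block yields a 96x96 image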
Or, if you use any kind of game engine, or something even more trivial, like SDL/PyGame, simply "blit" each tile.
PIL, as you found out, is perfectly capable of blitting one image (tile) onto another (whole map).
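A minimal blit with PIL could look like this (file name and sizes are hypothetical):
from PIL import Image

tile = Image.open("tile.png")            # a single tile image
whole_map = Image.new("RGB", (96, 96))   # the output canvas
whole_map.paste(tile, (0, 0))            # blit the tile at pixel (0, 0)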
I kind of managed to get a solution, but it's Python-only.
Using PIL I can make a mosaic of tile images and create the map. It's not a solid from-scratch solution, but it does the job.
I'm still open to another approach.
My solution is this method here:
import glob
import numpy as np
from PIL import Image

matrix = np.loadtxt(input_file, usecols=range(matrix_square), dtype=int)

tiles = []
for file in glob.glob("./tiles/*"):
    im = Image.open(file)
    tiles.append(im)

output = Image.new('RGB', (image_width, image_height))
for i in range(matrix_width):
    for j in range(matrix_height):
        x, y = i * tile_size, j * tile_size
        index = matrix[j][i]
        output.paste(tiles[index], (x, y))

output.save(output_file)
The matrix_square variable is the matrix dimension (the map is square). I'm still working on a better solution, but this is working fine for me.
You need to change tile_size to match the tile resolution that you're using.
This is a generated dungeon made with this method.
The tiles are bad, but the grid is fine enough.
I am using torchmetrics.functional to evaluate my trained model and I get this error. I have attached what my tensor values look like, and I believe I can make out the reason behind the error: my dataset includes non-binary values as labels. How do I work around this issue? I really appreciate your time.
Evaluation:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
trained_model = trained_model.to(device)

val_dataset = Dataset(
    val_df,
    tokenizer,
    max_token_len=MAX_TOKEN_COUNT
)

predictions = []
labels = []
for item in tqdm(val_dataset):
    _, prediction = trained_model(
        item["input_ids"].unsqueeze(dim=0).to(device),
        item["attention_mask"].unsqueeze(dim=0).to(device)
    )
    predictions.append(prediction.flatten())
    labels.append(item["labels"].int())

predictions = torch.stack(predictions).detach().cpu()
labels = torch.stack(labels).detach().cpu()
Tensor values:
tensor([[0.2794, 1.0000, 0.1865, ..., 0.0341, 0.0219, 0.8706],
[0.2753, 1.0000, 0.1864, ..., 0.0352, 0.0218, 0.8693],
[0.2747, 1.0000, 0.1858, ..., 0.0421, 0.0227, 0.8290],
...,
[0.2729, 1.0000, 0.1879, ..., 0.0430, 0.0231, 0.8263],
[0.2835, 1.0000, 0.1814, ..., 0.0363, 0.0215, 0.8570],
[0.2734, 1.0000, 0.1881, ..., 0.0430, 0.0232, 0.8277]])
tensor([[0, 2, 0, ..., 0, 0, 0],
[0, 3, 0, ..., 0, 0, 0],
[0, 1, 0, ..., 0, 0, 1],
...,
[0, 2, 0, ..., 0, 0, 1],
[0, 2, 0, ..., 0, 0, 2],
[0, 1, 1, ..., 0, 0, 1]], dtype=torch.int32)
accuracy(predictions, labels, threshold=THRESHOLD)
ValueError: If preds and target are of shape (N, ...) and preds are floats, target should be binary.
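One possible workaround, assuming the integer labels are counts and only presence/absence matters (an assumption on my part, not something stated above), is to binarize the targets before calling accuracy:
import torch

# Clamp the integer labels to 0/1 so they satisfy torchmetrics'
# requirement that targets be binary when preds are floats.
binary_labels = (labels > 0).int()
accuracy(predictions, binary_labels, threshold=THRESHOLD)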
Why is the structuring element asymmetric in OpenCV?
cv2.getStructuringElement(cv2.MORPH_ELLIPSE, ksize=(4,4))
returns
array([[0, 0, 1, 0],
[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]], dtype=uint8)
Why isn't it
array([[0, 1, 1, 0],
[1, 1, 1, 1],
[1, 1, 1, 1],
[0, 1, 1, 0]], dtype=uint8)
instead?
Odd-sized structuring elements are also asymmetric with respect to 90-degree rotations:
array([[0, 0, 1, 0, 0],
[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1],
[0, 0, 1, 0, 0]], dtype=uint8)
What's the purpose of that?
There's no purpose for it other than that it's one of many possible discretizations of such a shape. In the case of the size-5 ellipse, if it were full it would just be the same as MORPH_RECT, and if the same two pixels were removed from the sides as from the top it would be a diamond. Either way, the way it's actually implemented in the source is what you would expect: it traces a circle via the distance function and rounds to nearby integers to get the binary pixels. Search the source for cv::getStructuringElement and you'll find the implementation; it's nothing too fancy.
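You can verify the rotational asymmetry directly; a small illustrative check:
import cv2
import numpy as np

k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
print(np.array_equal(k, np.rot90(k)))  # False: not symmetric under a 90-degree rotation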
If you think an update to this function should be made, then open a PR on GitHub with an implemented version, or open an issue to discuss it first. I think a successful contribution would be easy here, and I'd venture that the case for symmetry is strong: one would expect that processing a symmetric image with an elliptical kernel wouldn't depend on the orientation of the image.
Given an array [0, 0, 1, 0, 1], is there a built-in method to get all of the indexes of values greater than 0? So, the method should return [2, 4].
find_index only returns the first match.
Working in Ruby 1.9.2.
In Ruby 1.8.7 and 1.9, iterator methods called without a block return an Enumerator object. So you could do something like:
[0, 0, 1, 0, 1].each_with_index.select { |num, index| num > 0 }.map { |pair| pair[1] }
# => [2, 4]
Stepping through:
[0, 0, 1, 0, 1].each_with_index
# => #<Enumerator: [0, 0, 1, 0, 1]:each_with_index>
_.select { |num, index| num > 0 }
# => [[1, 2], [1, 4]]
_.map { |pair| pair[1] }
# => [2, 4]
I would do
[0, 0, 1, 0, 1].map.with_index{|x, i| i if x > 0}.compact
And if you want that as a single method, Ruby does not have a built-in one, but you can do:
class Array
  def select_indice(&p)
    map.with_index { |x, i| i if p.call(x) }.compact
  end
end
and use it as:
[0, 0, 1, 0, 1].select_indice{|x| x > 0}