I started programming in Chisel recently, and I need to use dsptools for my project. However, I am having trouble getting even a very simple case to work.
For example, the code below:
package radix2

import chisel3._
import chisel3.experimental._
import chisel3.util._
import dsptools._
import dsptools.numbers._

class Radix2Butterfly extends Module {
  val io = IO(new Bundle {
    val x1 = Input(FixedPoint(6.W, binaryPoint = 2.BP))
    val x2 = Input(FixedPoint(6.W, binaryPoint = 2.BP))
    val y1 = Output(FixedPoint(12.W, binaryPoint = 4.BP))
    val y2 = Output(FixedPoint(12.W, binaryPoint = 4.BP))
  })

  // Real op
  val twiddle = 1.0.F(2.BP)
  io.y1 := io.x1 + twiddle * io.x2
  io.y2 := io.x1 - twiddle * io.x2
}

object Radix2ButterflyMain extends App {
  println("Generating the Butterfly hardware.")
  emitVerilog(new Radix2Butterfly(), Array("--target-dir", "generated"))
}
works without issue after running sbt test (I have a simple test).
However, just adding a single line with a call to dsptools, like this:
package radix2

import chisel3._
import chisel3.experimental._
import chisel3.util._
import dsptools._
import dsptools.numbers._

class Radix2Butterfly extends Module {
  val io = IO(new Bundle {
    val x1 = Input(FixedPoint(6.W, binaryPoint = 2.BP))
    val x2 = Input(FixedPoint(6.W, binaryPoint = 2.BP))
    val a1 = Input(DspComplex(FixedPoint(6.W, 4.BP), FixedPoint(6.W, 4.BP)))
    val y1 = Output(FixedPoint(12.W, binaryPoint = 4.BP))
    val y2 = Output(FixedPoint(12.W, binaryPoint = 4.BP))
  })

  // Real op
  val twiddle = 1.0.F(2.BP)
  io.y1 := io.x1 + twiddle * io.x2
  io.y2 := io.x1 - twiddle * io.x2
}

object Radix2ButterflyMain extends App {
  println("Generating the Butterfly hardware.")
  emitVerilog(new Radix2Butterfly(), Array("--target-dir", "generated"))
}
produces the following error:
[info] - should pass *** FAILED ***
[info] java.lang.AssertionError: assertion failed: The Chisel compiler plugin is now required for compiling Chisel code. Please see https://github.com/chipsalliance/chisel3#build-your-own-chisel-projects.
[info] at ... ()
[info] at dsptools.numbers.DspComplex.<init>(DspComplex.scala:59)
[info] at dsptools.numbers.DspComplex$.apply(DspComplex.scala:24)
[info] at radix2.Radix2Butterfly$$anon$1.$anonfun$a1$1(Radix2Butterfly.scala:21)
[info] at chisel3.internal.plugin.package$.autoNameRecursively(package.scala:33)
[info] at radix2.Radix2Butterfly$$anon$1.<init>(Radix2Butterfly.scala:21)
[info] at radix2.Radix2Butterfly.$anonfun$io$2(Radix2Butterfly.scala:13)
[info] at chisel3.internal.prefix$.apply(prefix.scala:48)
[info] at radix2.Radix2Butterfly.$anonfun$io$1(Radix2Butterfly.scala:13)
[info] at chisel3.internal.plugin.package$.autoNameRecursively(package.scala:33)
[info] ...
My build.sbt file looks like this:
// scalaVersion := "2.13.7"
scalaVersion := "2.12.13"

scalacOptions ++= Seq(
  "-deprecation",
  "-feature",
  "-unchecked",
  // "-Xfatal-warnings",
  // "-Xsource:2.11", // not for 3.5, but for 3.4
  "-language:reflectiveCalls",
  "-Xcheckinit",
  // Enables autoclonetype2
  "-P:chiselplugin:genBundleElements" // not for 3.5, but for 3.4
)

resolvers ++= Seq(
  Resolver.sonatypeRepo("snapshots"),
  Resolver.sonatypeRepo("releases")
)

val chiselVersion = "3.5.3"

addCompilerPlugin("edu.berkeley.cs" %% "chisel3-plugin" % chiselVersion cross CrossVersion.full)

libraryDependencies += "edu.berkeley.cs" %% "chisel3" % chiselVersion
libraryDependencies += "edu.berkeley.cs" %% "chisel-iotesters" % "2.5.0"
libraryDependencies += "edu.berkeley.cs" %% "chiseltest" % "0.5.3"
libraryDependencies += "edu.berkeley.cs" %% "rocket-dsptools" % "1.2.6"
I believe this has everything I need, including the Chisel compiler plugin that the error output refers to. I would greatly appreciate help fixing this issue.
Thanks a lot.
rocket-dsptools is compiled against Chisel 3.2.6 [1]. Chisel only maintains binary compatibility within a major version (the versioning scheme is <epoch>.<major>.<minor>; see [2]). rocket-dsptools is no longer maintained, so you cannot use it with the newest versions of Chisel: its artifacts were compiled without the compiler plugin that Chisel 3.5 now requires, which is why the assertion fires only once you instantiate DspComplex from that library. If you would like to use it with current Chisel anyway, you will need to build it from source (and likely update a lot of things, since 3.2.6 is over 2 years old).
[1] See the dependencies in the Maven POM: https://search.maven.org/artifact/edu.berkeley.cs/rocket-dsptools_2.12/1.2.6/jar
[2] Chisel Project Versioning: https://www.chisel-lang.org/chisel3/docs/appendix/versioning.html
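If the goal is just to get the original design building again, one option is to pin the whole build to the Chisel line that rocket-dsptools 1.2.6 was compiled against. A minimal build.sbt sketch (the pairing below is inferred from the POM in [1]; verify the exact companion versions there):

scalaVersion := "2.12.13"

// Pin to the 3.2.x line that rocket-dsptools 1.2.6 targets (see [1]).
val chiselVersion = "3.2.6"

libraryDependencies += "edu.berkeley.cs" %% "chisel3" % chiselVersion
libraryDependencies += "edu.berkeley.cs" %% "rocket-dsptools" % "1.2.6"

Note that the chisel3-plugin and chiseltest 0.5.3 lines in the original build.sbt target the 3.5 line and would have to be dropped or downgraded, and the emitVerilog entry point may not exist on 3.2.x (an assumption worth verifying), so building rocket-dsptools from source against a current Chisel remains the more future-proof path.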
Related
I was trying to deploy an ML model using Node.js with the help of the child_process package. While running __predict(), it takes too long and ends with a code 1 error.
Here I share all the related code to help decode the issue.
Model Python code:
import keras
import time

start = time.time()
encoder = keras.models.load_model('enc', compile=False)
decoder = keras.models.load_model('dec', compile=False)

import numpy as np
from flask import Flask, request, jsonify, render_template
import tensorflow as tf
import pickle
import string
import re
from keras_preprocessing.sequence import pad_sequences

def initialize_hidden_state():
    return tf.zeros((1, 1024))

eng_tokenizer, hin_tokenizer = pickle.load(open('tokenizer.pkl', 'rb'))

def clean(text):
    text = text.lower()
    special_char = set(string.punctuation + '।')  # Set of all special characters
    # Remove all the special characters
    text = ''.join(word for word in text if word not in special_char)
    seq = eng_tokenizer.texts_to_sequences([text])
    seq = pad_sequences(seq, maxlen=23, padding='post')
    return seq

def __predict(data):
    # Get the data from the POST request.
    # data = request.get_json(force=True)
    clean_input = clean(data)
    # Make prediction using model loaded from disk as per the data.
    hidden_enc = initialize_hidden_state()
    enc_out, enc_hidden = encoder(clean_input, hidden_enc)
    result = ''
    dec_hidden = enc_hidden
    dec_input = tf.expand_dims(hin_tokenizer.texts_to_sequences(['<Start>'])[0], 0)
    # ------------------------------------------------------------------
    for t in range(25):
        predictions, dec_hidden, attention_weights = decoder(dec_input, dec_hidden, enc_out)
        predicted_id = tf.argmax(predictions[0]).numpy()
        x = hin_tokenizer.sequences_to_texts([[predicted_id]])[0]
        if x == 'end':
            break
        result += x + ' '
        # the predicted ID is fed back into the model
        dec_input = tf.expand_dims([predicted_id], 0)
    CLEANR = re.compile(r"([A-Za-z])", re.DOTALL)
    result = re.sub(CLEANR, '', result)
    return result

# import json
# with open('data.json', 'r') as openfile:
#     json_object = json.load(openfile).get('data')

data = __predict("file")
end = time.time()
# print(start-end)
data1 = data + "abcd"
print(data1)
# print("abcd")
# dictionary = {
#     "data": data,
# }
# json_object = json.dumps(dictionary, indent=2)
# with open("result.json", "w") as outfile:
#     outfile.write(json_object)
When I use print("abcd") or print(start-end), it gives a result and exits with code 0. But when I print the data (the print(data1) line above), it gives no result and exits with code 1.
Here is the child process code:
app.get('/', (req, res) => {
  let dataToSend
  let largeDataSet = []
  // spawn new child process to call the python script
  const python = spawn('python', ['app.py'])
  // console.log(python);
  // collect data from script
  python.stdout.on('data', function (data) {
    console.log('Pipe data from python script ...')
    // dataToSend = data;
    largeDataSet.push(data)
  })
  // in the close event we are sure that the stream from the child process is closed
  python.on('close', (code) => {
    console.log(`child process close all stdio with code ${code}`)
    // send data to browser
    // largeDataSet = []
    console.log(largeDataSet.join(''))
    res.send(largeDataSet.join(''))
  })
})
Here is the error:
child process close all stdio with code 1
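Since exit code 1 with no stdout usually means the Python script raised an uncaught exception, one way to see the actual failure is to also listen on the child's stderr stream, which carries the traceback. A minimal sketch (the UnicodeEncodeError in the comment is only a guess, based on ASCII prints succeeding while printing the model output fails):

const { spawn } = require('child_process')

const python = spawn('python', ['app.py'])

python.stderr.on('data', (data) => {
  // An uncaught exception in app.py (e.g. a UnicodeEncodeError raised while
  // printing non-ASCII text to a pipe) shows up here before the code-1 exit.
  console.error(`python stderr: ${data}`)
})

python.on('close', (code) => {
  console.log(`child process exited with code ${code}`)
})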
Please help; I tried to understand the problem but failed badly even at understanding it.
Thanks in advance!
I am looking at the emit_rule example in the Bazel examples tree:
https://github.com/bazelbuild/examples/blob/5a8696429e36090a75eb6fee4ef4e91a3413ef13/rules/shell_command/rules.bzl
I want to add a data dependency to the custom rule. My understanding of the dependency attributes documentation is that a data attribute of type label_list should be used, but it does not appear to work.
# This example copied from docs
def _emit_size_impl(ctx):
    in_file = ctx.file.file
    out_file = ctx.actions.declare_file("%s.pylint" % ctx.attr.name)
    ctx.actions.run_shell(
        inputs = [in_file],
        outputs = [out_file],
        command = "wc -c '%s' > '%s'" % (in_file.path, out_file.path),
    )
    return [DefaultInfo(files = depset([out_file]))]

emit_size = rule(
    implementation = _emit_size_impl,
    attrs = {
        "file": attr.label(mandatory = True, allow_single_file = True),
        "data": attr.label_list(allow_files = True),
        # ^^^^^^^ Above does not appear to be sufficient to copy data dependency into sandbox
    },
)
With this rule instantiated as emit_size(name = "my_name", file = "my_file", data = ["my_data"]), I want to see my_data copied to bazel-out/ before the command runs. How do I go about doing this?
The data files should be added as inputs to the actions that need those files. Bazel only makes declared inputs available to an action in the sandbox, so the attribute alone is not enough. E.g., something like this:
def _emit_size_impl(ctx):
    in_file = ctx.file.file
    out_file = ctx.actions.declare_file("%s.pylint" % ctx.attr.name)
    ctx.actions.run_shell(
        inputs = [in_file] + ctx.files.data,
        outputs = [out_file],
        # For production rules, probably should use ctx.actions.run() and
        # ctx.actions.args():
        # https://bazel.build/rules/lib/Args
        command = "echo data is: ; %s ; wc -c '%s' > '%s'" % (
            "cat " + " ".join([d.path for d in ctx.files.data]),
            in_file.path,
            out_file.path,
        ),
    )
    return [DefaultInfo(files = depset([out_file]))]

emit_size = rule(
    implementation = _emit_size_impl,
    attrs = {
        "file": attr.label(mandatory = True, allow_single_file = True),
        "data": attr.label_list(allow_files = True),
    },
)
BUILD:
load(":defs.bzl", "emit_size")

emit_size(
    name = "size",
    file = "file.txt",
    data = ["data1.txt", "data2.txt"],
)
$ bazel build size
INFO: Analyzed target //:size (4 packages loaded, 9 targets configured).
INFO: Found 1 target...
INFO: From Action size.pylint:
data is:
this is data
this is other data
Target //:size up-to-date:
bazel-bin/size.pylint
INFO: Elapsed time: 0.323s, Critical Path: 0.02s
INFO: 2 processes: 1 internal, 1 linux-sandbox.
INFO: Build completed successfully, 2 total actions
$ cat bazel-bin/size.pylint
22 file.txt
I'm trying to convert a predicted RasterFrameLayer in RasterFrames into a GeoTiff file after training a machine learning model.
When using the demo data Elkton-VA from RasterFrames, it works fine.
But when using a cropped Sentinel-2A TIFF with an NDVI index (normalized from -1000 to 1000), it fails with a NullPointerException in the toRaster step.
It feels like it's due to the NoData values outside the ROI.
The test data is here, geojson and log.
GeoTrellis version: 3.3.0
RasterFrames version: 0.9.0
import geotrellis.proj4.LatLng
import geotrellis.raster._
import geotrellis.raster.io.geotiff.{MultibandGeoTiff, SinglebandGeoTiff}
import geotrellis.raster.io.geotiff.reader.GeoTiffReader
import geotrellis.raster.render.{ColorRamps, Png}
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.DecisionTreeClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}
import org.apache.spark.sql._
import org.locationtech.rasterframes._
import org.locationtech.rasterframes.ml.{NoDataFilter, TileExploder}

object ClassificiationRaster extends App {

  def readTiff(name: String) = GeoTiffReader.readSingleband(getClass.getResource(s"/$name").getPath)

  def readMtbTiff(name: String): MultibandGeoTiff = GeoTiffReader.readMultiband(getClass.getResource(s"/$name").getPath)

  implicit val spark = SparkSession.builder()
    .master("local[*]")
    .appName(getClass.getName)
    .withKryoSerialization
    .getOrCreate()
    .withRasterFrames

  import spark.implicits._

  val filenamePattern = "xiangfuqu_202003_mask_%s.tif"
  val bandNumbers = "ndvi".split(",").toSeq
  val bandColNames = bandNumbers.map(b ⇒ s"band_$b").toArray
  val tileSize = 256

  val joinedRF: RasterFrameLayer = bandNumbers
    .map { b ⇒ (b, filenamePattern.format(b)) }
    .map { case (b, f) ⇒ (b, readTiff(f)) }
    .map { case (b, t) ⇒ t.projectedRaster.toLayer(tileSize, tileSize, s"band_$b") }
    .reduce(_ spatialJoin _)
    .withCRS()
    .withExtent()

  val tlm = joinedRF.tileLayerMetadata.left.get
  // println(tlm.totalDimensions.cols)
  // println(tlm.totalDimensions.rows)
  joinedRF.printSchema()

  val targetCol = "label"

  val geojsonPath = "/Users/ethan/work/data/L2a10m4326/zds/test.geojson"
  spark.sparkContext.addFile(geojsonPath)
  import org.locationtech.rasterframes.datasource.geojson._

  val jsonDF: DataFrame = spark.read.geojson.load(geojsonPath)
  val label_df: DataFrame = jsonDF
    .select($"CLASS_ID", st_reproject($"geometry", LatLng, LatLng).alias("geometry"))
    .hint("broadcast")

  val df_joined = joinedRF.join(label_df, st_intersects(st_geometry($"extent"), $"geometry"))
    .withColumn("dims", rf_dimensions($"band_ndvi"))

  val df_labeled: DataFrame = df_joined.withColumn(
    "label",
    rf_rasterize($"geometry", st_geometry($"extent"), $"CLASS_ID", $"dims.cols", $"dims.rows")
  )

  df_labeled.printSchema()

  val tmp = df_labeled.filter(rf_tile_sum($"label") > 0).cache()

  val exploder = new TileExploder()
  val noDataFilter = new NoDataFilter().setInputCols(bandColNames :+ targetCol)

  val assembler = new VectorAssembler()
    .setInputCols(bandColNames)
    .setOutputCol("features")

  val classifier = new DecisionTreeClassifier()
    .setLabelCol(targetCol)
    .setFeaturesCol(assembler.getOutputCol)

  val pipeline = new Pipeline()
    .setStages(Array(exploder, noDataFilter, assembler, classifier))

  val evaluator = new MulticlassClassificationEvaluator()
    .setLabelCol(targetCol)
    .setPredictionCol("prediction")
    .setMetricName("f1")

  val paramGrid = new ParamGridBuilder()
    //.addGrid(classifier.maxDepth, Array(1, 2, 3, 4))
    .build()

  val trainer = new CrossValidator()
    .setEstimator(pipeline)
    .setEvaluator(evaluator)
    .setEstimatorParamMaps(paramGrid)
    .setNumFolds(4)

  val model = trainer.fit(tmp)

  val metrics = model.getEstimatorParamMaps
    .map(_.toSeq.map(p ⇒ s"${p.param.name} = ${p.value}"))
    .map(_.mkString(", "))
    .zip(model.avgMetrics)

  metrics.toSeq.toDF("params", "metric").show(false)

  val scored = model.bestModel.transform(joinedRF)

  scored.groupBy($"prediction" as "class").count().show
  scored.show(20)

  val retiled: DataFrame = scored.groupBy($"crs", $"extent").agg(
    rf_assemble_tile(
      $"column_index", $"row_index", $"prediction",
      tlm.tileCols, tlm.tileRows, IntConstantNoDataCellType
    )
  )

  val rf: RasterFrameLayer = retiled.toLayer(tlm)

  val raster: ProjectedRaster[Tile] = rf.toRaster($"prediction", 5848, 4189)

  SinglebandGeoTiff(raster.tile, tlm.extent, tlm.crs).write("/Users/ethan/project/IdeaProjects/learn/spark_ml_learn.git/src/main/resources/easy_b1.tif")

  val clusterColors = ColorRamp(
    ColorRamps.Viridis.toColorMap((0 until 1).toArray).colors
  )

  // val pngBytes = retiled.select(rf_render_png($"prediction", clusterColors)).first // It can output the png.
  // retiled.tile.renderPng(clusterColors).write("/Users/ethan/project/IdeaProjects/learn/spark_ml_learn.git/src/main/resources/classified2.png")
  // Png(pngBytes).write("/Users/ethan/project/IdeaProjects/learn/spark_ml_learn.git/src/main/resources/classified2.png")

  spark.stop()
}
I suspect there is a bug in the way the toLayer extension method is working. I will follow up with a bug report to the RasterFrames project; that will take a little more effort, I suspect.
Here is a possible workaround that is a little bit lower level. In this case it results in 25 non-overlapping GeoTiffs written out.
import geotrellis.store.hadoop.{SerializableConfiguration, _}
import geotrellis.spark.Implicits._
import org.apache.hadoop.fs.Path

// Need this to write local files from spark
val hconf = SerializableConfiguration(spark.sparkContext.hadoopConfiguration)

ContextRDD(
  rf.toTileLayerRDD($"prediction")
    .left.get
    .filter {
      case (_: SpatialKey, null) ⇒ false // remove any null Tiles
      case _ ⇒ true
    },
  tlm)
  .regrid(1024) // Regrid the Tiles so that they are 1024 x 1024
  .toGeoTiffs()
  .foreach { case (sk: SpatialKey, gt: SinglebandGeoTiff) ⇒
    val path = new Path(new Path("file:///tmp/output"), s"${sk.col}_${sk.row}.tif")
    gt.write(path, hconf.value)
  }
I am using Q-learning, and the program should be able to play the game after some tries, but it is not learning even when the epsilon value is 0.1.
I have tried changing the batch size and the memory size, and I have changed the code to give a -1 reward if the player dies.
import gym
import numpy as np
import random
import tensorflow as tf
import keyboard
import sys
import time

env = gym.make("Breakout-ram-v4")
observationSpace = env.observation_space
actionSpace = env.action_space
episode = 500

class Model_QNN:
    def __init__(self):
        self.memory = []
        self.MAX_MEMORY_TO_USE = 60_000
        self.gamma = 0.9
        self.model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(128, 1)),
            tf.keras.layers.Dense(256, activation="relu"),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(actionSpace.n, activation="softmax")
        ])
        self.model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])

    def remember(self, steps, done):
        self.memory.append([steps, done])
        if len(self.memory) >= self.MAX_MEMORY_TO_USE:
            del self.memory[0]

    def replay(self, batch_size=32):
        if len(self.memory) < batch_size:
            return
        mini = random.sample(self.memory, batch_size)
        states, targets = [], []
        for steps, done in mini:
            target = steps[2]
            if not done:
                target = steps[2] + (self.gamma * np.amax(self.model.predict(steps[3].reshape(1, 128, 1))[0]))
            target_f = self.model.predict(steps[0].reshape(1, 128, 1))
            target_f[0][steps[1]] = target
            states.append(steps[0])
            targets.append(target_f[0])
        self.model.fit(np.array(states).reshape(len(states), 128, 1), np.array(targets), verbose=0, epochs=10)

    def act(self, state, ep):
        if random.random() < ep:
            action = actionSpace.sample()
        else:
            action = self.model.predict(state.reshape(1, 128, 1))
            action = np.argmax(action)
        return action

    def saveModel(self):
        print("Saving")
        self.model.save("NEWNAMEDONE")

    def saveBackup(self, num):
        self.model.save("NEWNAME" + str(int(num)))

def main():
    agent = Model_QNN()
    epsilon = 0.9
    t_end = time.time()
    score = 0
    for e in range(2000):
        print("Working on episode : " + str(e) + " eps " + str(epsilon) + " Score " + str(score))
        preState = env.reset()
        preState, reward, done, _ = env.step(1)
        mainLife = 5
        done = False
        score = 0
        icount = 0
        render = False
        if e % 400 == 0 and not e == 0:
            render = True
        while not done:
            icount += 1
            if render:
                env.render()
            if keyboard.is_pressed('q'):
                agent.saveBackup(100)
                agent.saveModel()
                quit()
            rewrd = 0
            if _["ale.lives"] < mainLife:
                mainLife -= 1
                rewrd = -1
                action = 1
            else:
                action = agent.act(preState, epsilon)
            newState, reward, done, _ = env.step(action)
            if rewrd == -1:
                reward = -1
            agent.remember([preState / 255, action, reward, newState / 255], done)
            preState = newState
            score += reward
            if done:
                break
        agent.replay(1024)
        if epsilon >= 0.18:
            epsilon = epsilon * 0.995
        if (e + 1) % 500 == 0:
            agent.saveBackup((e + 1) / 20)
            agent.saveModel()

if __name__ == '__main__':
    main()
There is no error message; the program should learn, but it is not.
Why are you using Softmax on your output layer?
If you want to use softmax, use cross-entropy as your loss. However, it looks like you're trying to implement a value-based learning system, and for that the activation function on your output layer should be linear.
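For example, here is the question's network with only the output layer changed (a minimal sketch; the layer sizes are carried over from the question, not a recommendation):

import tensorflow as tf

num_actions = 4  # actionSpace.n for Breakout-ram-v4

# Q-network: a linear head lets the network output unbounded action values.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(128, 1)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_actions, activation="linear"),  # was "softmax"
])
model.compile(optimizer="adam", loss="mse")  # MSE pairs with the linear head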
I suggest you try your implementation on CartPole-v0 and then LunarLander-v2 first.
Those are solved environments and a great place to sanity-check your code.
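A smoke test of that kind can be as small as this (a sketch using the same classic gym API as the question, with random actions just to confirm the environment loop runs):

import gym

env = gym.make("CartPole-v0")
state = env.reset()
done = False
total = 0
while not done:
    # Replace the random policy with agent.act(state, epsilon) once it works.
    state, reward, done, info = env.step(env.action_space.sample())
    total += reward
print("episode return:", total)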
"There is no error message the program should learn and it is not."
Welcome to ML, where things fail silently.
I'm attempting to build a ROM-based window function using DspComplex and FixedPoint types, but I keep running into the following error:
chisel3.core.Binding$ExpectedHardwareException: vec element 'dsptools.numbers.DspComplex#32' must be hardware, not a bare Chisel type
The source code for my attempt at this looks like the following:
class TaylorWindow(len: Int, window: Seq[FixedPoint]) extends Module {
  val io = IO(new Bundle {
    val d_valid_in = Input(Bool())
    val sample = Input(DspComplex(FixedPoint(16.W, 8.BP), FixedPoint(16.W, 8.BP)))
    val windowed_sample = Output(DspComplex(FixedPoint(24.W, 8.BP), FixedPoint(24.W, 8.BP)))
    val d_valid_out = Output(Bool())
  })

  val win_coeff = Vec(window.map(x => DspComplex(x, FixedPoint(0, 16.W, 8.BP))).toSeq) // ROM storing our coefficients.

  io.d_valid_out := io.d_valid_in

  val counter = Reg(UInt(10.W))
  // Implicit reset
  io.windowed_sample := io.sample * win_coeff(counter)

  when(io.d_valid_in) {
    counter := counter + 1.U
  }
}
println(getVerilog(new TaylorWindow(1024, fp_seq)))
I'm actually reading the coefficients in from a file (this particular window has a complex generation function that I'm implementing in Python elsewhere) with the following sequence of steps:
val filename = "../generated/taylor_coeffs"
val coeff_file = Source.fromFile(filename).getLines
val double_coeffs = coeff_file.map(x => x.toDouble)
val fp_coeffs = double_coeffs.map(x => FixedPoint.fromDouble(x, 16.W, 8.BP))
val fp_seq = fp_coeffs.toSeq
Does this mean the DspComplex type can't be translated to Verilog?
Commenting out the win_coeff line makes the whole thing generate (but clearly doesn't do what I want it to do).
I think you should try using
val win_coeff = VecInit(window.map(x=>DspComplex.wire(x, FixedPoint.fromDouble(0.0, 16.W, 8.BP))).toSeq) // ROM storing our coefficients.
which will create hardware values like you want. Vec just creates a Vec of the specified type, i.e. a bare type rather than hardware.
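To illustrate the type-versus-hardware distinction with plain UInts (a rough sketch of the binding rule, not dsptools-specific):

// A bare Chisel type: describes a shape; fine inside IO(...), but not usable as a ROM.
val romType = Vec(4, UInt(8.W))

// Hardware values: VecInit builds an actual Vec of nodes, indexable as a ROM.
val rom = VecInit(Seq(1.U(8.W), 2.U(8.W), 3.U(8.W), 4.U(8.W)))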