Error: not valid for array Param (Pyomo)

I use from pyomo.environ import * and have this code:
## Define sets ##
model.i = Set(initialize=i_set)
model.p = Set(initialize=j_set)
## Define parameters ##
model.precedence = Param(model.i, initialize=precedence, doc='precedence relationship')
model.duration = Param(model.p, initialize=duration, doc='duration')
## Define variables ##
model.x = Var(model.i, within=NonNegativeReals)
model.z = Var(within=NonNegativeReals)
but it gives me this error:
ERROR: Constructing component 'duration' from data=None failed:
KeyError: "Error setting parameter value: Index '42' is not valid for array Param 'duration'"
Traceback (most recent call last):
File "D:\0.- MS CM UH\3.- Data Analysis in CM\hwk#7\Exercise 9.5.py", line 33, in <module>
model.duration = Param(model.p,initialize=duration,doc='duration')
I've already defined the parameter with Param. I don't know why I get this error.

What is your definition of the duration initializer? And of j_set? The error is almost certainly because the duration object (a dict?) has a key (42) that is not in the j_set used to initialize model.p.
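For illustration, a minimal sketch with hypothetical data that reproduces the mismatch; every key of the initializer dict must be a member of the indexing set:

from pyomo.environ import ConcreteModel, Set, Param

model = ConcreteModel()
j_set = [1, 2, 3]                # 42 is not a member of this set...
duration = {1: 5, 2: 7, 42: 9}   # ...but it is a key here
model.p = Set(initialize=j_set)

# This would raise: KeyError: "... Index '42' is not valid for array Param 'duration'"
# model.duration = Param(model.p, initialize=duration, doc='duration')

# Fix: make the initializer's keys match the members of model.p exactly
duration = {1: 5, 2: 7, 3: 9}
model.duration = Param(model.p, initialize=duration, doc='duration')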

TypeError: raw_parse() missing 1 required positional argument: 'sentence'

I tried this code below:
from nltk.parse.corenlp import CoreNLPParser
sdp = nltk.parse.corenlp.CoreNLPDependencyParser
result = list(sdp.raw_parse(sentence))
But I get this error:
TypeError                                 Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_2712/97028793.py in <module>
----> 1 result = list(sdp.raw_parse(sentence))

TypeError: raw_parse() missing 1 required positional argument: 'sentence'
What should I do?
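The likely cause: sdp is bound to the class itself rather than an instance, so sdp.raw_parse(sentence) passes sentence as the self argument and the sentence parameter goes unfilled. A minimal sketch of the fix, assuming a CoreNLP server is running at the default http://localhost:9000:

from nltk.parse.corenlp import CoreNLPDependencyParser

# Instantiate the parser (note the parentheses) instead of using the bare class
sdp = CoreNLPDependencyParser(url='http://localhost:9000')
result = list(sdp.raw_parse(sentence))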

Getting "attempt to index a nil value error" when attempting to create objects in Lua

I'm putting some code into a module so I can draw and maintain multiple copies. I'm getting this common error, but I can't see why. I understand what it's saying at a basic level, but since I can see a printout from the table being created, I don't understand why calling a function the module contains would throw this error.
I've read through all the answers on SO, but I'm still at a loss. I've tried printing out at various stages to see where the issue is; everything works as if I had created an instance of the module, but the error persists.
Code below is cleaned of extraneous stuff.
local orbitalCircle = include('lib/orbital_circle')
function init()
c1 = orbitalCircle.new(20, 42, 18, 1.7, 16, 62, 15, c1Sequence)
<-- at this point print code from the module's init function works
c1:doFunc(param) <-- this will call the error
The module:
local Orbital_Circle = {}
-- set up variables
local some Vars Are here
function Orbital_Circle.new(x, y, diameter, scale_factor, number_of_notes, beats_per_second, frames_per_second, sequence_data)
print("Orbital_Circle running")
end
function Orbital_Circle:doFunc(param)
self.var = param <-- I update a local var here
print("self.var") <-- I then print the updated number for sanity checking
end
return Orbital_Circle
I expect the var in my instance of this module to update and the function's code to run, but... no joy. I get the error.
Cheers.
I'm putting some code into a module so I can draw and maintain multiple copies.
I think there's a bit of a misunderstanding about how Lua modules work here. It's an easy mistake to make.
When you require a module in Lua, each subsequent require of the same file refers to the same code. So, e.g., these two variables contain exactly the same code:
local orbitalCircle1 = require('lib/orbital_circle')
local orbitalCircle2 = require('lib/orbital_circle')
Which means that you can't use Lua modules by themselves to create OOP type objects as you are trying to do. Your new function must return something that can be used like an instance of a class, a unique table for each call:
local Orbital_Circle = {}

local shared_variable = 1

function Orbital_Circle.new(x, y)
  -- create unique table
  local obj = {}

  -- access these from table/object methods with self.xxx
  obj.x = x or 0
  obj.y = y or 0
  obj.var = "initial value"

  -- now define functions with an explicit 'self' parameter...
  function obj.doFunc(self, param)
    self.var = self.var .. " " .. param
    shared_variable = shared_variable + 1
  end

  -- ... or with the syntactic 'self' sugar, ':'
  function obj:printVars()
    print("self.var = " .. self.var)
    print("shared_variable = " .. shared_variable)
    print("self.x = " .. self.x)
  end

  return obj
end

return Orbital_Circle
You can also define the methods as local functions outside of new (each taking an explicit self parameter) and attach them with entries such as:
obj.anotherMethod = functionDeclaredAtTopOfFile
… to keep things tidier, if you like.
Your code is completely messed up.
<-- will cause an error for unexpected symbol.
c1 = orbitalCircle.new(20, 42, 18, 1.7, 16, 62, 15, c1Sequence)
will give you an error for indexing a nil value (global 'c1'), because orbitalCircle.new has no return value.
Your init function is incomplete, and you don't call it, so the provided code does nothing even if you fix the above errors.
The reported error is not caused by any line of code you provided here.
Code below is cleaned of extraneous stuff.
I'm afraid you removed too much.
The error message tells you that you're indexing n, a nil value, from within a function that has been defined in n's scope.
This code for example:
local n

function test()
  local b = n.a
end

test()
would result in the error message:
input:3: attempt to index a nil value (upvalue 'n')
n is an upvalue for test because it is a local variable defined outside the function's body, but not a global variable.

'NoneType' object has no attribute 'add_summary'

I'm having trouble visualizing the weights and biases of my model using tensorboardX.
Here is my model (it's pretty simple anyway):
self.pipe = nn.Sequential(nn.Linear(9, 128),
                          nn.ReLU(),
                          nn.Linear(128, 256),
                          nn.ReLU(),
                          nn.Linear(256, 2),
                          nn.Softmax()
                          )

def forward(self, x):
    return self.pipe(x)
And here is where I get the error in PyTorch:
for name, param in net.named_parameters():
    writer.add_histogram(name, param, epoch_size, bins='auto')
and the error is
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-70-d060d2df4423> in <module>()
1 for name, param in net.named_parameters():
----> 2 writer.add_histogram(name, param, epoch_size, bins='auto')
~\Anaconda3\lib\site-packages\tensorboardX\writer.py in add_histogram(self, tag, values, global_step, bins, walltime)
403 if isinstance(bins, six.string_types) and bins == 'tensorflow':
404 bins = self.default_bins
--> 405 self.file_writer.add_summary(
406 histogram(tag, values, bins), global_step, walltime)
407
AttributeError: 'NoneType' object has no attribute 'add_summary'
But I really need to see the histogram of where the weights get stuck in a suboptimal region, so I changed the code a little to proceed step by step:
param = np.array(list(net.parameters()))
print(param[0].data)
writer.add_histogram('weight', param[0].data)
BOOM! Still the same error; that change made no difference.
The posted code snippet is insufficient to root cause the issue.
The member variable file_writer is set to None when the close() method is invoked on writer, so check whether close() was called. The close() method is also invoked when the writer object is used as a context manager and the with block is exited:
with SummaryWriter() as writer:
    writer.add_scalar...

writer.add_histogram  # this will cause a crash
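A minimal sketch of the fix, assuming tensorboardX's SummaryWriter and the net and epoch_size objects from the question: keep every add_* call inside the with block (or don't call close() until logging is done), so that file_writer is still set:

from tensorboardX import SummaryWriter

with SummaryWriter() as writer:
    for name, param in net.named_parameters():
        # the writer is still open here, so file_writer is not None
        writer.add_histogram(name, param, epoch_size, bins='auto')
# after the block exits, close() has run and further add_* calls will fail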

attempt to call method 'random' (a nil value) in Lua

Below is my code
require 'dpnn'
require 'cunn'

local deviceNumber = tonumber(os.getenv("CUDA_CARD_NR"))
cutorch.setDevice(deviceNumber)

local module = nn.Sequential():cuda()
module:add(nn.Linear(2,1):cuda())
module:add(nn.Sigmoid():cuda())

criterion = nn.BCECriterion():cuda() -- Binary Cross Entropy criterion

local targets = torch.CudaTensor(10):random(0,1)
local inputs = torch.CudaTensor(10,2):uniform(-1,1)

function trainEpoch(module, criterion, inputs, targets)
  for i = 1, inputs:size(1) do
    local idx = math.random(1, inputs:size(1))
    local input, target = inputs[idx], targets:narrow(1, idx, 1)
    -- forward
    local output = module:forward(input)
    local loss = criterion:forward(output, target)
    -- backward
    local gradOutput = criterion:backward(output, target)
    module:zeroGradParameters()
    local gradInput = module:backward(input, gradOutput)
    -- update
    module:updateGradParameters(0.9) -- momentum
    module:updateParameters(0.1)     -- W = W - 0.1*dL/dW
  end
end

for i = 1, 100 do
  trainEpoch(module, criterion, inputs, targets)
end
I am running above using the following command
CUDA_CARD_NR=1 luajit feedforwad.lua
It gives the following error
luajit: feedforwad.lua:13: attempt to call method 'random' (a nil value)
stack traceback:
feedforwad.lua:13: in main chunk
[C]: at 0x004064f0
I know that there is some error in the line
local targets = torch.CudaTensor(10):random(0,1)
But I am not able to figure out what is wrong.
luajit: feedforwad.lua:13: attempt to call method 'random' (a nil value)
is not "some error", and you should have no trouble figuring out what is wrong, because the error message tells you exactly what is wrong.
You tried to call a method named random, which happens to be a nil value.
This means that there is no function with that name, and therefore you can't call it.
According to the reference documentation (which you should have checked before coming here), the function is actually named rand.

Temporarily modify the current process's environment

I use the following code to temporarily modify environment variables.
import os
from contextlib import contextmanager

@contextmanager
def _setenv(**mapping):
    """``with`` context to temporarily modify the environment variables"""
    backup_values = {}
    backup_remove = set()
    for key, value in mapping.items():
        if key in os.environ:
            backup_values[key] = os.environ[key]
        else:
            backup_remove.add(key)
        os.environ[key] = value
    try:
        yield
    finally:
        # restore old environment
        for k, v in backup_values.items():
            os.environ[k] = v
        for k in backup_remove:
            del os.environ[k]
This with context is mainly used in test cases. For example,
def test_myapp_respects_this_envvar():
    with _setenv(MYAPP_PLUGINS_DIR='testsandbox/plugins'):
        myapp.plugins.register()
        [...]
My question: is there a simple/elegant way to write _setenv? I thought about doing backup = os.environ.copy() and then os.environ = backup, but I am not sure whether that would affect program behavior (e.g. if os.environ is referenced elsewhere in the Python interpreter).
I suggest the following implementation:
import contextlib
import os

@contextlib.contextmanager
def set_env(**environ):
    """
    Temporarily set the process environment variables.

    >>> with set_env(PLUGINS_DIR=u'test/plugins'):
    ...     "PLUGINS_DIR" in os.environ
    True

    >>> "PLUGINS_DIR" in os.environ
    False

    :type environ: dict[str, unicode]
    :param environ: Environment variables to set
    """
    old_environ = dict(os.environ)
    os.environ.update(environ)
    try:
        yield
    finally:
        os.environ.clear()
        os.environ.update(old_environ)
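A small aside the doctest doesn't show: because the finally block restores a full snapshot with clear() plus update(), it also rolls back changes the body itself made, not just the keys passed to set_env. An illustration with hypothetical variables A and B, assuming neither is set beforehand:

with set_env(A='1'):
    os.environ['B'] = '2'       # added by the body, not by set_env
assert 'A' not in os.environ    # rolled back on exit
assert 'B' not in os.environ    # also rolled back on exit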
EDIT: more advanced implementation
The context manager below can be used to add/remove/update your environment variables:
import contextlib
import os

@contextlib.contextmanager
def modified_environ(*remove, **update):
    """
    Temporarily updates the ``os.environ`` dictionary in-place.

    The ``os.environ`` dictionary is updated in-place so that the modification
    is sure to work in all situations.

    :param remove: Environment variables to remove.
    :param update: Dictionary of environment variables and values to add/update.
    """
    env = os.environ
    update = update or {}
    remove = remove or []

    # List of environment variables being updated or removed.
    stomped = (set(update.keys()) | set(remove)) & set(env.keys())
    # Environment variables and values to restore on exit.
    update_after = {k: env[k] for k in stomped}
    # Environment variables and values to remove on exit.
    remove_after = frozenset(k for k in update if k not in env)

    try:
        env.update(update)
        [env.pop(k, None) for k in remove]
        yield
    finally:
        env.update(update_after)
        [env.pop(k) for k in remove_after]
Usage examples:
>>> with modified_environ('HOME', LD_LIBRARY_PATH='/my/path/to/lib'):
... home = os.environ.get('HOME')
... path = os.environ.get("LD_LIBRARY_PATH")
>>> home is None
True
>>> path
'/my/path/to/lib'
>>> home = os.environ.get('HOME')
>>> path = os.environ.get("LD_LIBRARY_PATH")
>>> home is None
False
>>> path is None
True
EDIT2
A demonstration of this context manager is available on GitHub.
The same snapshot-and-restore idea as a bare try/finally block:
_environ = dict(os.environ)  # or os.environ.copy()
try:
    ...
finally:
    os.environ.clear()
    os.environ.update(_environ)
I was looking to do the same thing but for unit testing, here is how I have done it using the unittest.mock.patch function:
import os
from unittest import mock

def test_function_with_different_env_variable(self):
    with mock.patch.dict('os.environ', {'hello': 'world'}, clear=True):
        self.assertEqual(os.environ.get('hello'), 'world')
        self.assertEqual(len(os.environ), 1)
Basically, using unittest.mock.patch.dict with clear=True, we make os.environ a dictionary containing solely {'hello': 'world'}.
Removing clear=True will keep the original os.environ and add/replace the specified key/value pairs within it.
Removing {'hello': 'world'} will just create an empty dictionary, so os.environ will be empty within the with block.
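As an illustration of the difference (hypothetical variable hello, assumed unset beforehand): without clear=True the patch is merged into the existing environment, and everything is restored when the block exits:

import os
from unittest import mock

with mock.patch.dict('os.environ', {'hello': 'world'}):
    # 'hello' is added/overridden; all pre-existing variables remain visible
    assert os.environ['hello'] == 'world'
# on exit the patch is undone, including removing the keys it added
assert 'hello' not in os.environ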
In pytest you can temporarily set an environment variable using the monkeypatch fixture. See the docs for details. I've copied a snippet here for your convenience.
import os

import pytest
from typing import Any, NewType

# Alias for the ``type`` of monkeypatch fixture.
MonkeyPatchFixture = NewType("MonkeyPatchFixture", Any)

# This is the function we will test below to demonstrate the ``monkeypatch`` fixture.
def get_lowercase_env_var(env_var_name: str) -> str:
    """
    Return the value of an environment variable. Variable value is made all lowercase.

    :param env_var_name:
        The name of the environment variable to return.
    :return:
        The value of the environment variable, with all letters in lowercase.
    """
    env_variable_value = os.environ[env_var_name]
    lowercase_env_variable = env_variable_value.lower()
    return lowercase_env_variable

def test_get_lowercase_env_var(monkeypatch: MonkeyPatchFixture) -> None:
    """
    Test that the function under test indeed returns the lowercase-ified
    form of ENV_VAR_UNDER_TEST.
    """
    name_of_env_var_under_test = "ENV_VAR_UNDER_TEST"
    env_var_value_under_test = "EnvVarValue"
    expected_result = "envvarvalue"

    # KeyError because ``ENV_VAR_UNDER_TEST`` was looked up in the os.environ
    # dictionary before its value was set by ``monkeypatch``.
    with pytest.raises(KeyError):
        assert get_lowercase_env_var(name_of_env_var_under_test) == expected_result

    # Temporarily set the environment variable's value.
    monkeypatch.setenv(name_of_env_var_under_test, env_var_value_under_test)
    assert get_lowercase_env_var(name_of_env_var_under_test) == expected_result

def test_get_lowercase_env_var_fails(monkeypatch: MonkeyPatchFixture) -> None:
    """
    This demonstrates that ENV_VAR_UNDER_TEST is reset in every test function.
    """
    env_var_name_under_test = "ENV_VAR_UNDER_TEST"
    expected_result = "envvarvalue"

    with pytest.raises(KeyError):
        assert get_lowercase_env_var(env_var_name_under_test) == expected_result
For unit testing I prefer using a decorator function with optional parameters. This way I can use the modified environment values for a whole test function. The decorator below also restores the original environment values in case the function raises an Exception:
import os

def patch_environ(new_environ=None, clear_orig=False):
    if not new_environ:
        new_environ = dict()

    def actual_decorator(func):
        from functools import wraps

        @wraps(func)
        def wrapper(*args, **kwargs):
            original_env = dict(os.environ)
            if clear_orig:
                os.environ.clear()
            os.environ.update(new_environ)
            try:
                return func(*args, **kwargs)
            finally:
                # restore in-place even if an Exception was raised
                os.environ.clear()
                os.environ.update(original_env)

        return wrapper

    return actual_decorator
Usage in unit tests:
import os
import unittest

class Something:
    @staticmethod
    def print_home():
        home = os.environ.get('HOME', 'unknown')
        print("HOME = {0}".format(home))

class SomethingTest(unittest.TestCase):
    @patch_environ({'HOME': '/tmp/test'})
    def test_environ_based_something(self):
        Something.print_home()  # prints: HOME = /tmp/test

unittest.main()
