I am using Tensorflow 1.0.0 and Python 3.5.
When I try to do:
cell = tf.nn.rnn_cell.BasicRNNCell(state_size)
I get the following error:
AttributeError
<ipython-input-25-41a20d8458a7> in <module>()
1 # Forward pass
2 print(tf.__version__)
----> 3 cell = tf.nn.rnn_cell.BasicRNNCell(state_size)
4 states_series, current_state = tf.nn.dynamic_rnn(cell, inputs_series, initial_state = init_state)
AttributeError: module 'tensorflow.python.ops.nn' has no attribute 'rnn_cell'
Can someone help me?
TensorFlow changed a lot of APIs in the run-up to 1.0.
You'll need to replace tf.nn.rnn_cell.BasicLSTMCell with tf.contrib.rnn.BasicLSTMCell; the same move applies to your cell, so use tf.contrib.rnn.BasicRNNCell instead of tf.nn.rnn_cell.BasicRNNCell.
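A minimal sketch of that change against the asker's snippet, assuming TensorFlow 1.0.x; the placeholder shapes and the state_size value are made up for illustration:
import tensorflow as tf  # assumes TensorFlow 1.0.x

state_size = 4    # hypothetical value
batch_size = 2
num_steps = 10
num_features = 3

# Dummy input and initial state, only there to make the sketch self-contained
inputs_series = tf.placeholder(tf.float32, [batch_size, num_steps, num_features])
init_state = tf.zeros([batch_size, state_size])

# In TF 1.0 the cell classes moved from tf.nn.rnn_cell to tf.contrib.rnn
cell = tf.contrib.rnn.BasicRNNCell(state_size)
states_series, current_state = tf.nn.dynamic_rnn(cell, inputs_series, initial_state=init_state)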
I had the same problem in TensorFlow 2.1 when I used this code:
rnn_cells = tf.nn.rnn_cell.MultiRNNCell(
    [lstm_cell(size_layer) for _ in range(num_layers)],
    state_is_tuple = False,
)
I got this error:
AttributeError: module 'tensorflow_core._api.v2.nn' has no attribute 'rnn_cell'
Finally, I replaced tf.nn.rnn_cell.MultiRNNCell with tf.compat.v1.nn.rnn_cell.MultiRNNCell, and then it worked.
Replace tf.nn.rnn_cell.BasicRNNCell(state_size) with tf.compat.v1.nn.rnn_cell.BasicRNNCell(state_size).
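A minimal sketch of the compat.v1 route, assuming TensorFlow 2.x; state_size, size_layer and num_layers are hypothetical values, and LSTMCell stands in for the lstm_cell() helper used in the answer above:
import tensorflow as tf  # assumes TensorFlow 2.x

state_size = 4
size_layer = 8
num_layers = 2

# The legacy cells still live under the compat.v1 namespace in TF 2.x
cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(state_size)
stacked = tf.compat.v1.nn.rnn_cell.MultiRNNCell(
    [tf.compat.v1.nn.rnn_cell.LSTMCell(size_layer) for _ in range(num_layers)],
    state_is_tuple=False,
)
# If the rest of the script uses graph-mode APIs such as dynamic_rnn,
# you may also need tf.compat.v1.disable_eager_execution().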
I'm trying to do some basic text inference using the bloom model
from transformers import AutoModelForCausalLM, AutoModel
# checkpoint = "bigscience/bloomz-7b1-mt"
checkpoint = "bigscience/bloom-1b7"
tokenizer = AutoModelForCausalLM.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
# Set the prompt and maximum length
prompt = "This is the prompt text"
max_length = 100000
# Tokenize the prompt
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
# Generate the text
outputs = model.generate(inputs)
result = tokenizer.result(outputs[0])
# Print the generated text
print(result)
I get the error
Traceback (most recent call last):
File "/tmp/pycharm_project_444/bloom.py", line 15, in <module>
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1265, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'BloomForCausalLM' object has no attribute 'encode'
Anyone know what the issue is?
It's running on a remote server
I was trying to use the class loaded by AutoModelForCausalLM as a tokenizer instead of using AutoTokenizer.
The model object returned by AutoModelForCausalLM does not have an encode() method.
The problem is that you were using a model class to create your tokenizer.
AutoModelForCausalLM loads a model for causal language modelling (LM), not a tokenizer, as stated in the documentation (https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM). You got that error because the model has no method called encode.
You can use AutoTokenizer to achieve what you want. Also, in Hugging Face Transformers, calling a tokenizer directly maps to its encoding method, so you do not have to call encode explicitly. See below:
from transformers import AutoTokenizer, AutoModel
checkpoint = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
# Set the prompt and maximum length
prompt = "This is the prompt text"
max_length = 100000
# Tokenize the prompt
inputs = tokenizer("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") # Same as tokenizer.encode(...)
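If you also want the generation and decoding steps from the original script, here is a minimal sketch; it swaps AutoModel for AutoModelForCausalLM so that generate() is available (an assumption on my part, matching the commented-out intent of the question) and uses tokenizer.decode() instead of the non-existent tokenizer.result():
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bigscience/bloom-1b7"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# AutoModelForCausalLM (rather than AutoModel) exposes generate()
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=20)  # cap the length of the generated text
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)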
I am currently trying to make a bias file prior to running Maxent, but I keep getting the error below. Has anyone faced this issue before and fixed it?
bg <- xyFromCell(dens.ras2speciesocc,
                 sample(which(!is.na(values(subset(env, 1)))), 10000,
                        prob = values(dens.ras2speciesocc)[!is.na(values(subset(env, 1)))]))
This runs fine, and then:
enmeval_resultsspeciesocc <- ENMevaluate(speciesocc, env,
method= "randomkfold", kfolds = 10, algorithm='maxent.jar', bg.coords = bg)
Which returns this error:
Error in ENMevaluate(speciesocc, env, method = "randomkfold", kfolds = 10, :
  unused argument (kfolds = 10)
Does anyone have any idea?
From the linked documentation:
occ, env, bg.coords, RMvalues, fc, occ.grp, bg.grp, method, bin.output, rasterPreds, clamp, progbar:
These arguments from previous versions are backward-compatible to avoid unnecessary errors for older scripts, but in a later version these arguments will be permanently deprecated.
I was having the same issue and then removed the kfolds argument to see if the default (kfolds=5) would run and I got this message instead:
Error in ENMevaluate(occ, env, method = "randomkfold", algorithm = "maxent.jar", :
Datasets "occs" and "bg" have different column names. Please make them identical and try again.
Then I checked my bg columns and they were named 'x' and 'y' instead of the 'longitude' and 'latitude' I had in my occ file.
That alone didn't solve it - it went back to giving me the initial error message - but I am sure it was also contributing to the problem!
BUT THEN,
I looked into RMvalues (regularisation multipliers) and fc (feature classes) and added them to the code, like this:
enmeval_results <- ENMevaluate(occ, env, bg.coords = bg, method = 'randomkfold',
RMvalues = seq(0.5, 4, 0.5),
fc = c("L", "LQ", "H", "LQH", "LQHP", "LQHPT"),
algorithm='maxent.jar')
and it worked! Obviously, change the fc and RMvalues to what suits you best.
I hope this works for you too!
Angeliki
I have been trying to use the aTSA and forecast packages together and noticed that Arima() works but forecast() gives an error. Has anyone encountered this or found a solution? I am especially trying to use stationary.test() from aTSA, which is the main reason I loaded that library.
error: Error in forecast(.) : 'object' should be 'Arima' or 'estimate' class estimated from arima() or estimate()
As soon as I removed aTSA, the code below worked:
fitArima_CO <- Arima(train_CO, order = c(4, 1, 1))
fitArima_CO %>%
  forecast() %>%
  autoplot() +
  autolayer(test_CO, colour = TRUE, series = 'Test Data') +
  ylab("Adjusted CO") +
  guides(colour = guide_legend(title = "Data Series"), fill = guide_legend(title = "Prediction Interval")) +
  scale_color_manual(values = c("gold"))
Unfortunately the aTSA package does not play nicely with other time series packages. In particular, its forecast() function masks the forecast() function from the forecast package. If you need both packages loaded, you can call the forecast-package version explicitly as forecast::forecast().
The stationary.test() function does an ADF test by default. You can easily do the same test using adf.test() from the tseries package.
I've been looking at the documentation for praw and I simply cannot find which method is for looking through all the posts.
What I want to do is look through all the posts:
import ProcessingBot
import Auth
import praw
SETPHRASES = ["python", "bots", "jarmahent", "is proves there was no Global Warming in 1966", "test"]
SETPHRASE = ("This is a bot, ignore this reply")
USERNAME = Auth.pG
def run():
    r = praw.Reddit(Auth.app_ua)
    print("Signing In")
    r.set_oauth_app_info(Auth.app_id, Auth.app_secret, Auth.app_uri)
    print("Setting Oauth App Info")
    r.refresh_access_information(Auth.app_refresh)
    sub = r.get_subreddit("ProcessingImages")
    print("Getting SubReddit")
    # Look through all the posts: this is where the post finder will be
    print("Finished")
    return r

while True:
    run()
The formatting is a little wrong; I indented by four spaces before pasting and it still didn't come through correctly.
This is covered in the example on the documentation front page:
>>> import praw
>>> r = praw.Reddit(user_agent='my_cool_application')
>>> submissions = r.get_subreddit('opensource').get_hot(limit=5)
>>> [str(x) for x in submissions]
Aside from get_hot, there are also methods for rising and top posts.
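For example, to walk a listing of the asker's subreddit, a minimal sketch assuming the same PRAW 3-style API used above (the user agent string is a placeholder):
import praw

r = praw.Reddit(user_agent='my_cool_application')
sub = r.get_subreddit('ProcessingImages')

# get_new / get_rising / get_top take the same limit argument as get_hot;
# a larger limit pages through more of the listing
for submission in sub.get_hot(limit=10):
    print(submission.title)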
I've been trying to create a graph using py2neo / Neo4j, but I'm constantly hitting problems with my script, the latest one being the following.
(Bear in mind that I am also new to Python, sorry!)
Here is the code:
from py2neo import neo4j, node
graph = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")
graph.clear()
i_word = graph.get_or_create_index(neo4j.Node, "i_word")
i_token = graph.get_or_create_index(neo4j.Node, "i_token")
labels = {"TOKEN"}
properties = {"name": "Ana"}
a_node = node(*labels, **properties)
c_node, = graph.create(a_node)
I'm getting the following error:
... py2neo/neo4j.py", line 237
... TypeError: Cannot cast node from (('TOKEN',), {'name':'Ana'})
Any ideas? Many thanks for your time.
rgds,
Pedro
The node function in py2neo 1.6 does not have label support: you can only supply properties at creation time and then add labels afterwards. The alternative is to use a Cypher expression such as:
CREATE (n:TOKEN {name: 'Ana'})
As a side note, bear in mind that labels are conventionally written in TitleCase (e.g. Token) rather than UPPER_CASE.
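A minimal sketch of the two-step approach under py2neo 1.6, assuming the Node class exposes add_labels() in that release (worth double-checking against your installed version):
from py2neo import neo4j, node

graph = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")

# Create the node with properties only...
a_node = node(name="Ana")
c_node, = graph.create(a_node)

# ...then attach the label afterwards (assumes py2neo 1.6's add_labels())
c_node.add_labels("TOKEN")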