Why are my images received as string? (ROS)

I already found my mistake. Should I delete this question?
I have a very, very simple subscriber node. (Unfortunately, the usual examples found online use Strings, although a book of mine uses Ints.)
The code is
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import rosbag

def image_callback(msg):
    #print(msg.data.header)
    print(type(msg.data))
    print(len(msg.data))

def image_recorder():
    rospy.init_node('image_recorder', anonymous=True)
    sub = rospy.Subscriber('image_results', Image, image_callback)
    rospy.spin()

if __name__ == '__main__':
    try:
        image_recorder()
    except rospy.ROSInterruptException:
        pass
Now, what is the problem?
The output of this is:
<type 'str'>
1184260
Why? The messages we are receiving are Images (that is why I tried msg.data.header and it failed!).
How can I recover the images?
And no, I do not need CvBridge to convert them to OpenCV images; I just need the ROS images.

msg is of type Image, so msg.header is the correct way to access the header. msg.data is only the raw pixel buffer, which Python 2 deserializes as a str; that is why you see <type 'str'>.
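As a minimal sketch (field names taken from the standard sensor_msgs/Image message definition), the callback can read everything it needs straight from msg:
def image_callback(msg):
    # msg is a sensor_msgs/Image, so its fields are accessed directly.
    print(msg.header.stamp, msg.header.frame_id)
    print(msg.height, msg.width, msg.encoding)
    # msg.data is only the raw pixel buffer (str in Python 2, bytes in Python 3).
    print(len(msg.data))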

ValueError: [E143] Labels for component 'tagger' not initialized

I've been following this tutorial to create a custom NER. However, I keep getting this error:
ValueError: [E143] Labels for component 'tagger' not initialized. This can be fixed by calling add_label, or by providing a representative batch of examples to the component's initialize method.
This is how I defined the spacy model:
import spacy
from spacy.tokens import DocBin
from tqdm import tqdm
nlp = spacy.blank("ro") # load a new spacy model
source_nlp = spacy.load("ro_core_news_lg")
nlp.tokenizer.from_bytes(source_nlp.tokenizer.to_bytes())
nlp.add_pipe("tagger", source=source_nlp)
doc_bin = DocBin() # create a DocBin object
I just met the same problem. The screenshot of the config setup in that tutorial is misleading.
If you just want to run through the tutorial, generate the config file with
only the ner checkbox ticked (and not tagger).
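As a minimal sketch of that fix in code (assuming you only need the sourced NER component and not the tagger, since sourcing tagger without initializing its labels is exactly what raises E143):
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("ro")                     # new blank Romanian pipeline
source_nlp = spacy.load("ro_core_news_lg")  # pretrained pipeline to copy from
nlp.tokenizer.from_bytes(source_nlp.tokenizer.to_bytes())

# Source only the component you actually train; dropping "tagger" avoids E143.
nlp.add_pipe("ner", source=source_nlp)

doc_bin = DocBin()  # container for the training Docs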

TextMobject doesn't work on Jupyter-Manim

I am currently using jupyter-manim since it is the most efficient way for me to use manim. I'm running my code on Kaggle, and every time I use TextMobject in manim, it outputs an error that says Latex error converting to dvi. See log output above or the log file: media/Tex/54dfbfee288272f0.log. I've tried TexMobject and the Text function, but only Text works. The Text function is limited, however, and I'm not sure how to change the font. Is there a way to fix this, or is it something that comes with using jupyter-manim? All the other functions seem to work, such as drawing shapes, animating scenes, etc.
%%manim
class Text(Scene):
    def construct(self):
        first_line = TextMobject('Hi')
        second_line = TexMobject('Hi')
        # Only one that works
        third_line = Text('Hi')
I tried your Manim program and it worked as expected for me. I would make sure that you:
include from manimlib.imports import * as your first line (to import the Manim library), and
include self.play(...) so you can actually see the objects.
I think you already have these, but I'm mentioning them in case you don't.
You may also be getting the error because you do not have a LaTeX distribution installed on your system (e.g. MiKTeX or TeX Live).
I think part of your problem may be the name of the class you chose. I had problems with your code until I changed the name from Text to TextTest. Here is a minimally working example that works fine in my Jupyter notebook (after running import jupyter_manim of course).
%%manim TextTest -p -ql
from manim import *

class TextTest(Scene):
    def construct(self):
        first_line = TextMobject('Hi 1')
        second_line = TexMobject('Hi 2').shift(DOWN)
        third_line = Text('Hi 3').shift(UP)
        self.add(first_line)
        self.add(second_line)
        self.add(third_line)
        self.wait(1)
Also, you should be aware that TextMobject and TexMobject have been deprecated.
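For reference, in current Manim Community releases the deprecated classes are roughly replaced by Tex (for TextMobject) and MathTex (for TexMobject); here is a hedged sketch of the same scene with those names, assuming a recent Community install and the same cell setup as above:
%%manim TextTestCE -p -ql
from manim import *

class TextTestCE(Scene):
    def construct(self):
        first_line = Tex('Hi 1')                   # LaTeX text, replaces TextMobject
        second_line = MathTex('Hi 2').shift(DOWN)  # math-mode LaTeX, replaces TexMobject
        third_line = Text('Hi 3').shift(UP)        # plain text, rendered without LaTeX
        self.add(first_line)
        self.add(second_line)
        self.add(third_line)
        self.wait(1)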

Using python to parse twitter url

I am using the following code but I am not able to extract any information from the url.
from urllib.parse import urlparse
if __name__ == "__main__":
    z = 5
    url = 'https://twitter.com/isro/status/1170331318132957184'
    df = urlparse(url)
    print(df)
ParseResult(scheme='https', netloc='twitter.com', path='/isro/status/1170331318132957184', params='', query='', fragment='')
I was hoping to extract the tweet message, the time of the tweet, and other information available from the link, but the code above clearly doesn't achieve that. How do I go about it from here?
I think you may be misunderstanding the purpose of the urllib.parse.urlparse function. From the Python documentation:
urllib.parse.urlparse(urlstring, scheme='', allow_fragments=True)
Parse a URL into six components, returning a 6-item named tuple. This
corresponds to the general structure of a URL:
scheme://netloc/path;parameters?query#fragment
From the result you are seeing in ParseResult, your code is working perfectly - it is breaking your URL up into the component parts.
It sounds as though you actually want to fetch the web content at that URL. In that case, I might take a look at urllib.request.urlopen instead.
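A minimal sketch of that suggestion, reusing the URL from the question. Be aware that twitter.com builds its pages with JavaScript, so the raw HTML fetched this way may not contain the tweet text; the official Twitter API is usually the reliable route for tweet content:
from urllib.request import urlopen

url = 'https://twitter.com/isro/status/1170331318132957184'
with urlopen(url) as resp:
    html = resp.read().decode('utf-8', errors='replace')

# Inspect the start of whatever document the server returned.
print(html[:500])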

How can I insert a file into another file when using the Spyder IDE?

When editing a file using the Spyder IDE editor I want to add the contents of another file, similar to what Emacs ctrl-x i does. For example:
main.py
import sys

def main():
    help_text = """ ### external file contents go here ###"""
    print(help_text)

if __name__ == '__main__':
    sys.exit(main())
insertme.txt
Help text someone else gave me.
My desired result is a main.py looking like below (after file insertion and a little clean up):
import sys

def main():
    help_text = """Help text someone else gave me."""
    print(help_text)

if __name__ == '__main__':
    sys.exit(main())
Going through help, online searches, etc. I can't find any direct way to do this (obviously I can do it other ways, but they are more time consuming). Is something like this directly possible with Spyder? If so, how?
(Spyder maintainer here) This is not possible in our editor, sorry.
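As a hedged workaround outside the editor (not a Spyder feature), you could avoid pasting altogether and load the text at runtime; this sketch assumes insertme.txt sits next to main.py:
import sys
from pathlib import Path

def main():
    # Read the help text from the neighbouring file instead of embedding it.
    help_text = (Path(__file__).parent / "insertme.txt").read_text()
    print(help_text)

if __name__ == '__main__':
    sys.exit(main())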

Latex output from sympy does not correctly display in Google Colaboratory Jupyter notebooks

I am using Google's Colaboratory platform to run python in a Jupyter notebook. In standard Jupyter notebooks, the output of sympy functions is correctly typeset Latex, but the Colaboratory notebook just outputs the Latex, as in the following code snippet:
import numpy as np
import sympy as sp
sp.init_printing(use_unicode=True)
x=sp.symbols('x')
a=sp.Integral(sp.sin(x)*sp.exp(x),x);a
results in Latex output like this:
$$\int e^{x} \sin{\left (x \right )}\, dx$$
The answer cited in these questions, Rendering LaTeX in output cells in Colaboratory and LaTeX equations do not render in google Colaboratory when using IPython.display.Latex, doesn't fix the problem. While it provides a method to display LaTeX expressions in the output of a code cell, it doesn't fix the output of the built-in sympy functions.
Any suggestions on how to get sympy output to properly render? Or is this a problem with the Colaboratory notebook?
I have just made this code snippet to make sympy work like a charm on colab.research.google.com!
import sympy
from sympy import init_printing

def custom_latex_printer(exp, **options):
    from google.colab.output._publish import javascript
    url = "https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=default"
    javascript(url=url)
    return sympy.printing.latex(exp, **options)

init_printing(use_latex="mathjax", latex_printer=custom_latex_printer)
Put it after you have imported sympy.
This basically tells sympy to embed the MathJax library via the Colab API before it outputs any LaTeX.
You need to include the MathJax library before displaying. Set it up in a cell like this first:
from google.colab.output._publish import javascript
url = "https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.3/latest.js?config=default"
Later, include javascript(url=url) before displaying:
x=sp.symbols('x')
a=sp.Integral(sp.sin(x)*sp.exp(x),x)
javascript(url=url)
a
Then, it will display correctly.
Using Colab's MathJax and setting the configuration to TeX-MML-AM_HTMLorMML worked for me. Below is the code:
from sympy import init_printing
from sympy.printing import latex

def colab_LaTeX_printer(exp, **options):
    from google.colab.output._publish import javascript
    url_ = "https://colab.research.google.com/static/mathjax/MathJax.js?"
    cfg_ = "config=TeX-MML-AM_HTMLorMML"  # "config=default"
    javascript(url=url_ + cfg_)
    return latex(exp, **options)
# end of def

init_printing(use_latex="mathjax", latex_printer=colab_LaTeX_printer)
