saving figures seaborn to sageplot - latex

I have been trying to include statistical graphs (box plot, bar chart, histogram, etc.) of some randomly generated data, drawn with Seaborn, in LaTeX without saving them to a file first. I use \sageplot[width=8cm][png]{(Python_Graphics_Format)} from the SageTeX package to do this.
For example, when I draw a box plot with Seaborn, the returned object offers all kinds of methods (name.gcf(), name.show(), name.plot(), name.draw(), etc.) but nothing in Sage's Graphics format. Is there any way to do this without using name.savefig() or the like?
Why is this important to me?
I would like to generate a list of predefined Sage functions in a separate .tex file, together with a bunch of randomly generated data, and input them at the top of my TeX code after \maketitle. This way I will be able to generate multiple problems of a similar nature and upload them to the online homework system Ximera.
Here is some code that I took from Stack Overflow:
import seaborn as sns
sns.set_style("whitegrid")
tips = sns.load_dataset("tips")
ax = sns.boxplot(x=tips["total_bill"])
Your help is most appreciated.
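A note beyond the question itself: a Seaborn call such as sns.boxplot returns a matplotlib Axes, and its parent Figure can be retrieved without writing a file. Whether \sageplot will accept that object directly depends on SageTeX internals (it generally wants something it can save itself), so treat the following as a sketch rather than a confirmed solution:

import seaborn as sns

sns.set_style("whitegrid")
tips = sns.load_dataset("tips")
ax = sns.boxplot(x=tips["total_bill"])

# The Axes returned by seaborn belongs to a matplotlib Figure;
# this retrieves that Figure without calling savefig() yourself.
fig = ax.get_figure()
# fig is the object SageTeX would ultimately have to serialize; if
# \sageplot insists on a Sage Graphics object, some form of saving
# to an intermediate file may still be unavoidable.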

Related

How to upload a report with graphs and comments from a Jupyter notebook to a Google Spreadsheet

I have a report; here is a sample to reproduce it:
In [1]: import pandas as pd
        import numpy as np

In [2]: import seaborn as sns
        sns.set_theme(style="darkgrid")

        # Load an example dataset with long-form data
        fmri = sns.load_dataset("fmri")

        # Plot the responses for different events and regions
        sns.lineplot(x="timepoint", y="signal",
                     hue="region", style="event",
                     data=fmri)

Out[2]: (line plot of signal over timepoint by region and event)

In [3]: fmri

Out[3]: (preview of the fmri DataFrame)
How can I upload my report from the Jupyter notebook to Google Sheets? I have to hand over the work in the file "https://docs.google.com/spreadsheets/....", but there were no restrictions on how best to do it; I only realised this when I tried to submit a link to my GitHub in the Google form. Now I am wondering how to convert my results. One option is to insert screenshots of my report into the Google Sheets file, but maybe there is a better option.
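Not part of the original thread, but one possible approach: the tabular part of the report can be pushed into the spreadsheet with the gspread library, assuming a Google service account with edit access to the sheet (the key file name and sheet URL below are placeholders); the chart itself cannot be uploaded this way and would still need to be inserted as an image or screenshot:

import gspread

# Assumed: a service-account JSON key shared with the target spreadsheet
gc = gspread.service_account(filename="service_account.json")
sh = gc.open_by_url("https://docs.google.com/spreadsheets/....")
ws = sh.sheet1

# Write the fmri DataFrame from the notebook above as plain cell values
ws.update([fmri.columns.tolist()] + fmri.astype(str).values.tolist())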

Problems plotting time-series interactively with Altair

Description of the problem
My goal is quite basic: to plot time series in an interactive plot. After some research I decided to give Altair a try.
There are already QGIS plugins for time-series visualisation, but as far as I'm aware, none for plotting time series at the vector level by interactively clicking on a map and selecting a polygon. So that's why I decided to go for a self-made solution using Altair, maybe combining it with Folium to add functionalities later on.
I'm totally new to the Altair library (as well as Vega and Vega-Lite), and quite new to data science and data visualisation as well... so apologies in advance for my ignorance!
There are already well explained tutorials on how to plot time series with Altair (for example here, or in the official website). However, my study case has some particularities that, as far as I've seen, have not yet been approached altogether.
The data is produced using the Python API for Google Earth Engine and preprocessed with Python and the pandas/geopandas libraries:
In Google Earth Engine, a vegetation index (NDVI in the current case) is computed at pixel level for a certain region of interest (ROI). Then the function image.reduceRegions() is mapped across the ImageCollection to compute the mean NDVI in every polygon of a FeatureCollection, whose features represent agricultural parcels. The resulting vector file is exported.
Under a JupyterLab environment, the data is loaded into a geopandas GeoDataFrame and preprocessed, transposing the DataFrame and creating a datetime column, among other steps, in order to shape the data for time-series representation with Altair.
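As a rough illustration of that preprocessing (not the asker's actual code; the file name, the parcel_id field, and the one-column-per-date layout are all assumptions), it might look something like this:

import geopandas as gpd
import pandas as pd

# Hypothetical export from Earth Engine: one row per parcel, one column per acquisition date
gdf = gpd.read_file("ndvi_parcels.geojson")

# Transpose so each parcel becomes a column and each date a row,
# then build a proper datetime column for Altair
df = gdf.drop(columns="geometry").set_index("parcel_id").T.reset_index()
df = df.rename(columns={"index": "date"})
df["date"] = pd.to_datetime(df["date"])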
Data overview after preprocessing (table not reproduced here):
My "final" goal would be to show, in the same graphic, an interactive line plot with a set of lines representing each one an agricultural parcel, with parcels categorized by crop types in different colours, e.g. corn in green, wheat in yellow, peer trees in brown... (the information containing the crop type of each parcel can be added to the DataFrame making a join with another DataFrame).
I am thinking of something looking more or less like the following example, with legend's years being the parcels coloured by crop types:
But so far I haven't managed to make my data look this way... at all.
As you can see there are many nulls in the data (this is due to the application of a cloud-masking function and to the fact that several Sentinel-2 orbits intersect the ROI). I would like to just keep the non-null values for each column/parcel, but I don't know whether this data configuration can pose problems (any advice on that?).
So far I got:
Generating the preceding graphic, for a single parcel, already takes around 23 seconds, which is something that maybe should/could be improved (how?).
And more importantly, the expected line representing the item/polygon/parcel's values (NDVI) is not even shown in the plot (note that I chose a parcel containing rather few non-null values).
For sure I am doing many things wrong. Would be great to get some advice to solve (some of) them.
Sample of the data and code to reproduce the issue
Here's a text sample of the data in JSON format, and the code used to reproduce the issue is the following:
import pandas as pd
import geopandas as gpd
import altair as alt

df = pd.read_json(r"path\to\json\file.json")
df['date'] = pd.to_datetime(df['date'])
print(df.dtypes)
df
Output: (DataFrame preview not reproduced here)
lines = alt.Chart(df).mark_line().encode(
    x='date:O',
    y='17811:Q',
    color=alt.Color(
        '17811:Q', scale=alt.Scale(scheme='redyellowgreen', domain=(-1, 1)))
)
lines.properties(width=700, height=600).interactive()
Output: (resulting chart not reproduced here)
Thanks in advance for your help!
If I understand correctly, it is mostly the format of your dataframe that needs to be changed from wide to long, which you can do either via .melt in pandas or .transform_fold in Altair. With melt, the default names for the melted columns are 'variable' (the previous column names) and 'value' (the value from each column):
alt.Chart(df.melt(id_vars='date'), width=500).mark_line().encode(
    x='date:T',
    y='value',
    color=alt.Color('variable')
)
The gaps come from the NaNs; if you want Altair to interpolate across missing values, you can drop the NaNs:
alt.Chart(df.melt(id_vars='date').dropna(), width=500).mark_line().encode(
    x='date:T',
    y='value',
    color=alt.Color('variable')
)
If you want to do it all in Altair, the following is equivalent to the last pandas example above (the transform uses 'key' instead of 'variable' as the name for the former columns). I also use an ordinal instead of a nominal type for the color encoding, to show how to make the colors more similar to your example:
alt.Chart(df, width=500).mark_line().encode(
    x='date:T',
    y='value:Q',
    color=alt.Color('key:O')
).transform_fold(
    df.drop(columns='date').columns.tolist()
).transform_filter(
    'isValid(datum.value)'
)

The use of librosa.effects.trim to remove the silent part in audio

I am doing a speech emotion recognition ML.
I currently use pyAudioAnalysis to do multi-directory feature extraction. However, the dataset consists of audio files containing a lot of approximately silent sections. My objective is to remove the approximately silent parts from all the audio files and then extract meaningful features.
My current approach is to use librosa to trim the silent parts.
from librosa.effects import trim
import librosa
from pyAudioAnalysis import audioBasicIO
import matplotlib.pyplot as plt
signal, Fs = librosa.load(file_directory)
trimed_signal = trim(signal,top_db=60)
fig, ax = plt.subplots(nrows=3, sharex=True, sharey=True)
librosa.display.waveplot(trimed_signal, sr=Fs, ax=ax[0])
ax[0].set(title='Monophonic')
ax[0].label_outer()
I tried to plot the wave after trimming using librosa.display.waveplot but an AttributeError occurred showing AttributeError: module 'librosa' has no attribute 'display'
My questions are
How to plot the trimmed wave?
Is it possible to generate a trimmed .wav file? I ask because pyAudioAnalysis's input for feature extraction is a .wav file path, but the output of librosa is an array.
You need to import librosa.display separately. See this issue for the reason.
You can use librosa.output.write_wav (check the docs) to store the trimmed array as a wave file. E.g. librosa.output.write_wav(path, trimed_signal, Fs).
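Two caveats worth adding (not from the original answer, so check against your librosa version): librosa.effects.trim returns a (signal, interval) tuple rather than a bare array, and librosa.output was removed in librosa 0.8, where the soundfile package is the usual way to write .wav files:

import librosa
import soundfile as sf  # assumed to be installed alongside librosa

signal, Fs = librosa.load(file_directory)  # file_directory as in the question

# trim() returns the trimmed signal plus the (start, end) sample interval
trimmed_signal, interval = librosa.effects.trim(signal, top_db=60)

# On librosa >= 0.8 there is no librosa.output; soundfile writes the .wav
# that pyAudioAnalysis can then read for feature extraction.
sf.write("trimmed.wav", trimmed_signal, Fs)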

How to use external data in an OSRM profile

In this Mapbox blog post, Lauren Budorick shares how they got a routing engine working with OSRM that uses elevation data in order to give cyclists better routes... AMAZING!
I also want to explore the potential of OSRM's routing when plugging in external (user-generated) data, but I'm still having a hard time grasping how OSRM's profiles work. I think I get the main idea: every way (or node?) is piped into a few functions that, all together, score how good that path is.
But that's it; there are plenty of missing pieces in my head, like what each of the functions Lauren uses in her profile does. If anyone could point me to some more detailed information on how all of this works, you'd make my next week much, much easier :)
Also, in Lauren's post, inside source_function she loads a ./srtm_bayarea.asc file. What does that .asc file look like? How would one generate a file like that from, say, data stored in a PostgreSQL database? Can we use some other format, like GeoJSON?
Then, when in segment_function she uses things like source.lon and target.lat, do those refer to the raw data stored in the .asc file? Or is that file processed into some standard format that everything is mapped onto?
As you can see, I'm a complete newbie at routing, and maybe GIS in general, but I'd love to learn more about these standards and tools that circle around the OSRM ecosystem. Can you share some tips with me?
I think I get the main idea: every way (or node?) is piped into a few functions that, all together, score how good that path is.
Right, every way and every node are scored as they are read from an OSM dump to determine passability of a node and speed of a way (used as the scoring heuristic).
A basic description of the data format can be found here. As it reads, data immediately available in ArcInfo ASCII grids includes SRTM data. Currently plaintext ASCII grids are the only supported format. There are several great Python tools for GIS developers that may help in converting other data types to ASCII grids - check out rasterio, for example. Here's an example of a really simple python script to convert NED IMGs to ASCII grids:
import sys
import rasterio as rio
import numpy as np

args = sys.argv[1:]

# Read the elevation band and the raster profile (nodata value etc.)
with rio.drivers():
    with rio.open(args[0]) as src:
        elev = src.read()[0]
        profile = src.profile

def shortify(x):
    if x == profile['nodata']:
        return -9999
    elif x == np.finfo(x).tiny:
        return 0
    else:
        return int(round(x))

# Convert every cell to a short integer and append the grid as plain text
out_elev = [[shortify(x) for x in row] for row in elev]
with open(args[0] + '.asc', 'a') as dst:
    np.savetxt(dst, np.array(out_elev), fmt="%s", delimiter=" ")
Regarding source.lon and target.lat: source and target are nodes provided as arguments by the extraction process. Their coordinates are used to look up data at each location during extraction.
Make sure to read thoroughly through the relevant wiki page (already linked).
Alternatively, feel free to open a GitHub issue at https://github.com/Project-OSRM/osrm-backend/issues with OSRM questions.

Convert .3DS or .OBJ files to .MD2

I am using Metaio's Creator to create an AR event, using a model the client purchased from TurboSquid.com. Every time I try to convert the .3DS file to an .MD2 file I get an error that there are too many polygons.
Is there a program that can automatically convert the .3DS or .OBJ file to an .MD2 without lowering the polygon count, or that automatically removes polygons without risking the integrity of the model?
MD2 inherently supports only 4096 polygons. As #0r10n said, you have to reduce the number of polygons to make it work with MD2. For conversion, I had the best experience using the QTip plugin for 3ds Max: http://qtipplugin.com/
Very easy to use and very powerful.
If the model has too many polygons, you can import it into a 3D DCC tool like Max, Maya or Blender and use their tools to reduce the polygon count.
For example, using Blender 2.49 you can use the PolyReducer script, which preserves UV coordinates.
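Not from the original answers, but as a rough sketch for newer Blender versions (the ratio value is a guess you would tune until the count drops below MD2's 4096-triangle limit), the same reduction can be scripted with a Decimate modifier:

import bpy

# Assumes the imported .3DS/.OBJ model is the active object
obj = bpy.context.active_object

# Add a Decimate (collapse) modifier and keep roughly half the polygons
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.5
bpy.ops.object.modifier_apply(modifier="Decimate")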
