How to pass img_texture to all image widgets of kv file so as to get continuous image loop - kivy

I want to pass an img_texture to the Image widgets in my KV file so that the image updates in a continuous loop. How can I do that?
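Since the actual files did not make it into the question, here is one common pattern, as a sketch: keep the texture in an `ObjectProperty` on the root widget (e.g. `img_texture = ObjectProperty(None)` in the Python file, reassigned from a `Clock.schedule_interval` callback), and bind each Image in the kv file to that property. `RootWidget` is a hypothetical name:

```kv
<RootWidget>:
    Image:
        # Re-renders automatically every time root.img_texture is reassigned
        texture: root.img_texture
    Image:
        texture: root.img_texture
```

Because kv bindings are reactive, reassigning the property on each frame is enough to keep every bound Image widget updating in a loop.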

Related

How do you load a file (.csv) into a Beeware/Briefcase application?

I am using kivy as the GUI and Briefcase as a packaging utility. My .kv file is in the appname/project/src/projectName/resources folder. I also need a .csv file, in the same folder, and want to use pandas with it. I have no problem with importing the packages (I added them to the .toml file). I can't use the full path because when I package the app, the path will be different on each computer. Using relative paths to the app.py file does not work, giving me a file not found error. Is there a way to read a file using a relative path (maybe the source parameter in the .toml file)?
kv = Builder.load_file('resources/builder.kv')
df = pd.read_csv('resources/chemdata.csv')

class ChemApp(App):
    def build(self):
        self.icon = 'resources/elemental.ico'
        return kv
I just encountered and solved a similar problem with Briefcase, even though I was using BeeWare's Toga GUI.
In my case, the main Python file app.py had to access a database file resources/data.csv. In the constructor of the class where I create a main window in app.py, I added the following lines (The import line wasn't there, but included here for clarification):
from pathlib import Path
self.resources_folder = Path(__file__).joinpath("../resources").resolve()
self.db_filepath = self.resources_folder.joinpath("data.csv")
Then I used self.db_filepath to successfully open the CSV file on my phone.
__file__ holds the path to the current module on whatever platform or device the app runs on.
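Applied to the ChemApp code from the question, the same pattern looks like this (a sketch; the `resources` folder name and `chemdata.csv` are taken from the question):

```python
from pathlib import Path

# Resolve the resources folder relative to this module, not the current
# working directory, which differs once the app is packaged by Briefcase.
resources_folder = Path(__file__).resolve().parent / "resources"
csv_path = resources_folder / "chemdata.csv"
```

`str(csv_path)` can then be passed to `pd.read_csv` and the same pattern used for `Builder.load_file` and the icon.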

python - how to pass slack token to python app from dockerfile

I have a Python app in which I pass my Slack tokens via a JSON file.
My config.json is:
{
    "slack_bot_token": "xoxb-12345789",
    "slack_signing_secret": "AAA23245GJ"
}
And my slackeventsapp.py is:
from slackeventsapi import SlackEventAdapter
from slackclient import SlackClient
import json

tokens = {}
with open('config.json') as json_data:
    tokens = json.load(json_data)

EVENT_ADAPTER = SlackEventAdapter(tokens.get("slack_signing_secret"), "/slack/events")
SLACK_CLIENT = SlackClient(tokens.get("slack_bot_token"))

def PostMsg(channel, text):
    SLACK_CLIENT.api_call("chat.postMessage", channel=channel, text=text)
I need to pass the two values slack_signing_secret and slack_bot_token via the Dockerfile. How can I pass these?
Copy the config file into the image with a COPY instruction in your Dockerfile. This way the config file will be baked into the built image.
Alternatively, mount a volume at run time from wherever the config file is kept.
There might be other ways.
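A minimal sketch of the first option, assuming the Dockerfile sits next to config.json and slackeventsapp.py (the base image, dependency list, and entry point are assumptions, not from the question):

```dockerfile
FROM python:3.9-slim
WORKDIR /app

# Bake the config into the image alongside the app code
COPY config.json slackeventsapp.py ./
RUN pip install slackeventsapi slackclient

CMD ["python", "slackeventsapp.py"]
```

For the volume option, skip the COPY of config.json and run e.g. `docker run -v /host/path/config.json:/app/config.json <image>`. Keep in mind that anyone with access to the image can read a baked-in config, so a volume is safer for real tokens.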

how to upload image in correct path - image: Property "image" expects a valid pathname as data in akeneo

I have already installed Akeneo 2.0. I need to upload an image via an Excel file, but when I give the path, the import shows this warning:
Warning
image: Property "image" expects a valid pathname as data, "/tmp/pim/upload_tmp_dir/01.png" given.
How can I solve this?
In order to import images while importing products, you will have to include them in your zip archive and use their relative path (relative to your product import file) inside the archive.
e.g. if your archive file has the following structure:
- products_import.xlsx
- images/
  - image1.png
you will have to put ./images/image1.png in the image column of your products_import.xlsx file.
Then upload the zip archive (instead of only the Excel file) in the import profile.
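Building that archive can be scripted; a sketch, using the example file names above (the helper function name is mine, not part of Akeneo):

```python
import zipfile

def build_import_archive(zip_path, sheet, image_paths):
    """Bundle the product sheet and its images so the image column can
    reference relative paths like ./images/image1.png inside the archive."""
    with zipfile.ZipFile(zip_path, "w") as archive:
        archive.write(sheet, arcname=sheet)
        for img in image_paths:
            archive.write(img, arcname=img)

# build_import_archive("products_import.zip", "products_import.xlsx",
#                      ["images/image1.png"])
```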

How to load a mat file from a google storage bucket in jupyter notebook

I am trying to train a model on ~16gb of image data. I need to import an annotations.mat file from my Cloud Storage bucket. However, since loadmat requires a file path, I am not sure how to import a Google Storage bucket path. I tried to create a pickle file of the mat data, but Jupyter Notebook crashes.
Current attempt:
from google.cloud import storage
client = storage.Client()
bucket = client.get_bucket('bucket-id')
blob = bucket.get_blob('path/to/annotations.pkl')
# crashes here
print(blob.download_as_string())
I want to do something like:
import scipy.io as sio
client = storage.Client()
bucket = client.get_bucket('bucket-id')
matfile = sio.loadmat(bucket_path + 'path/to/annotations.mat')
Does anyone know how to load a mat file from a Cloud Storage bucket?
I haven't found any direct way to read a mat file from a blob object in Python. However, there is a workaround: instead of reading the blob directly with loadmat, download it to a temporary file and pass that file's path to loadmat.
In order to reproduce the scenario, I followed the Google Cloud Storage Python example (uploaded a mat file to a bucket). The following Python code downloads the blob object, reads it using loadmat, and finally removes the temporary file:
from google.cloud import storage
import scipy.io
import os

bucket_name = '<BUCKET NAME>'
mat_file_path = '<PATH>/<MAT FILENAME>'
temp_mat_filename = 'temp.mat'

storage_client = storage.Client()
bucket = storage_client.get_bucket(bucket_name)
blob = bucket.blob(mat_file_path)

# Download the mat file to a temporary local file
blob.download_to_filename(temp_mat_filename)

# Read the mat object from the temporary file
mat = scipy.io.loadmat(temp_mat_filename)

# Clean up the temporary file
os.remove(temp_mat_filename)
Hope it helps :)
For uploading objects to the bucket, this page has more info:
https://cloud.google.com/storage/docs/uploading-objects
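As an alternative to the temporary file, scipy.io.loadmat also accepts a file-like object, so the blob can be parsed entirely in memory. A sketch (the function name is mine; bucket and path are placeholders):

```python
import io

import scipy.io

def load_mat_from_gcs(bucket_name, blob_path):
    """Download a .mat blob into memory and parse it without a temp file."""
    # Imported here so the core loadmat-from-bytes trick is visible below;
    # assumes google-cloud-storage is installed, as in the question.
    from google.cloud import storage

    client = storage.Client()
    blob = client.get_bucket(bucket_name).blob(blob_path)
    # loadmat accepts any file-like object, so wrap the bytes in BytesIO
    return scipy.io.loadmat(io.BytesIO(blob.download_as_string()))
```

This avoids writing 16 GB-adjacent scratch files to disk, at the cost of holding the whole mat file in memory at once.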

How do I batch extract metadata from DM3 files using ImageJ?

How can you extract metadata for a batch of images? My first thought was to record a macro and then modify it to operate on a list of file names.
In that vein, I tried recording a macro doing something like this:
Ctrl-o                   # Open a file
12.dm3, Enter            # Select file to open
Ctrl-i                   # Open metadata in a new window
Ctrl-s                   # Save file
Info for 12.txt, Enter   # Name of file being saved
Ctrl-w                   # Close current window
Ctrl-w                   # Close current window
These steps work when I do them manually. This results in the following macro, which seems to be missing most of what I tried to record:
open("/path/to/file/12.dm3");
run("Show Info...");
run("Close");
run("Close");
I also tried modifying a Jython script that is supposed to extract dimension metadata from an image:
from java.io import File
from loci.formats import ImageReader
from loci.formats import MetadataTools
import glob

# Create output file
outFile = open('./pixel_sizes.txt', 'w')

# Get list of DM3 files
filenames = glob.glob('*.dm3')

for filename in filenames:
    # Open file
    file = File('.', filename)
    # Parse file header
    imageReader = ImageReader()
    meta = MetadataTools.createOMEXMLMetadata()
    imageReader.setMetadataStore(meta)
    imageReader.setId(file.getAbsolutePath())
    # Get pixel size
    pSizeX = meta.getPixelsPhysicalSizeX(0)
    # Close the image reader
    imageReader.close()
    outFile.write(filename + "\t" + str(pSizeX) + "\n")

# Close the output file
outFile.close()
You could use getImageInfo() instead of run("Show Info..."). It returns the same text that run("Show Info...") displays, but as a string inside the macro, which you can then modify and save as you like. See http://rsb.info.nih.gov/ij/developer/macro/functions.html#getImageInfo for more information.
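A batch version of the recorded macro along those lines might look like this (a sketch using standard ImageJ macro functions, not tested against real DM3 files):

```
dir = getDirectory("Choose a directory of DM3 files");
files = getFileList(dir);
for (i = 0; i < files.length; i++) {
    if (endsWith(files[i], ".dm3")) {
        open(dir + files[i]);
        // Same text as the Show Info... window, captured as a string
        info = getImageInfo();
        File.saveString(info, dir + files[i] + ".txt");
        close();
    }
}
```

This replaces the fragile keystroke recording with explicit open/save/close calls per file.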
