I'm learning how to use serverless functions. I'm trying to connect a Watson Assistant to a Python action through webhooks; the action processes a small dataset, and I'm still struggling to get it working.
I wrote the code in a Jupyter environment, reading the raw CSV dataset from GitHub and handling it with pandas. The issue is that when I invoke the action in IBM Cloud Functions it only works about 10% of the time. I debugged it in Jupyter and Visual Studio and the code seems fine, but once I move it to the IBM Functions environment it doesn't perform.
import sys
import csv
import json
import pandas as pd

location = 'Germany'  # Country parameter hard-coded for testing purposes

# Daily COVID-19 report, read once from the raw GitHub CSV
data = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/03-24-2020.csv')

def main(args):
    location = args.get("location")
    for index, row in data.iterrows():
        currentLoc = row['Country/Region']
        if currentLoc == location:
            covid_statistics = {
                "Province/State": row['Province/State'],
                "Country/Region": row['Country/Region'],
                "Confirmed": row['Confirmed'],
                "Deaths": row['Deaths'],
                "Recovered": row['Recovered']
            }
            return {"message": covid_statistics}
    # Only report missing data after every row has been checked
    return {"message": "Data not available"}
Where is MobyLCPSolver?
ImportError: cannot import name 'MobyLCPSolver' from 'pydrake.all' (/home/docker/drake/drake-build/install/lib/python3.8/site-packages/pydrake/all.py)
I have the latest Drake and cannot import it.
Can anyone help?
As of pydrake v1.12.0, the MobyLcp C++ API is not bound in Python.
However, if you feed an LCP into Solve() then Drake can choose Moby to solve it. You can take advantage of this to create an instance of MobyLCP:
import numpy as np
from pydrake.all import (
    ChooseBestSolver,
    MakeSolver,
    MathematicalProgram,
)
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2)
prog.AddLinearComplementarityConstraint(np.eye(2), np.array([1, 2]), x)
moby_id = ChooseBestSolver(prog)
moby = MakeSolver(moby_id)
print(moby.SolverName())
# The output is: "Moby LCP".
# The C++ type of the `moby` object is drake::solvers::MobyLCP.
That only allows for calling Moby via the MathematicalProgram interface, however. To call any MobyLCP-specific C++ functions like SolveLcpFastRegularized, those would need to be added to the bindings code specifically before they could be used.
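For example, here is a minimal sketch of actually solving the small LCP above by feeding it into Solve(), which lets Drake pick Moby for this program (reusing prog and x from the snippet):

from pydrake.all import Solve

result = Solve(prog)  # Drake chooses a suitable solver for the LCP
print(result.is_success())
print(result.GetSolution(x))
print(result.get_solver_id().name())  # expected to report the Moby LCP solver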
You can file a feature request on the Drake GitHub page when you need access to C++ classes or functions that aren't bound into Python yet, or, even better, you can open a pull request with the bindings that you need.
I have a simple Python application without any framework or API calls.
How will I monitor the Python application on Instana on Kubernetes?
I want a code snippet to add to the Python application that will trace the application and display it on Instana.
How will I monitor the Python application on Instana on Kubernetes?
There is a publicly available guide that should help you set up the Kubernetes agent.
I have a simple Python application without any framework or API calls.
Well, Instana is for distributed tracing, meaning distributed components calling each other's APIs, predominantly by using known frameworks (with registered spans).
Nevertheless, you could make use of SDKSpan; here is a super simple example:
import os

# Test mode: spans are queued locally instead of being reported to an agent
os.environ["INSTANA_TEST"] = "true"

import instana
import opentracing.ext.tags as ext
from instana.singletons import get_tracer
from instana.util.traceutils import get_active_tracer

def foo():
    tracer = get_active_tracer()
    with tracer.start_active_span(
            operation_name="foo_op",
            child_of=tracer.active_span
    ) as foo_scope:
        foo_scope.span.set_tag(ext.SPAN_KIND, "exit")
        result = 20 + 1
        foo_scope.span.set_tag("result", result)
        return result

def main():
    tracer = get_tracer()
    with tracer.start_active_span(operation_name="main_op") as main_scope:
        main_scope.span.set_tag(ext.SPAN_KIND, "entry")
        answer = foo() + 21
        main_scope.span.set_tag("answer", answer)

if __name__ == '__main__':
    main()
    # In test mode the spans stay in the recorder queue, so we can print them
    spans = get_tracer().recorder.queued_spans()
    print('\nRecorded Spans and their IDs:',
          *[(index,
             span.s,
             span.data['sdk']['name'],
             dict(span.data['sdk']['custom']['tags']),
             ) for index, span in enumerate(spans)],
          sep='\n')
This should work in any environment, even without an agent, and it should give you output like this:
Recorded Spans and their IDs:
(0, 'ab3af60079f3ca57', 'foo_op', {'span.kind': 'exit', 'result': 21})
(1, '53b67f7298684cb7', 'main_op', {'span.kind': 'entry', 'answer': 42})
Of course, in production you wouldn't want to print the recorded spans but send them to a properly configured agent, so you should remove the INSTANA_TEST setting.
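As a rough sketch of that production variant (assuming an Instana agent is reachable from the pod), you would drop the test flag and the manual printing and let the sensor report the spans on its own:

import instana  # no INSTANA_TEST set: the sensor reports spans to the agent

# foo() and main() stay exactly as above
if __name__ == '__main__':
    main()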
I have performed some AWS Glue version 3.0 job testing using Docker containers, as detailed here.
The following code outputs two lists, one per connection, with the names of the tables in a database:
import boto3
db_name_s3 = "s3_connection_db"
db_name_mysql = "glue_catalog_mysql_connection_db"
def retrieve_tables(database_name):
    session = boto3.session.Session()
    glue_client = session.client("glue")
    response_get_tables = glue_client.get_tables(DatabaseName=database_name)
    return response_get_tables
s3_tables_list = [table_dict["Name"] for table_dict in retrieve_tables(db_name_s3)["TableList"]]
mysql_tables_list = [table_dict["Name"] for table_dict in retrieve_tables(db_name_mysql)["TableList"]]
print(f"These are the tables from {db_name_s3} db: {s3_tables_list}\n")
print(f"These are the tables from {db_name_mysql} db {mysql_tables_list}")
Now, I try to create a dynamic dataframe with the from_catalog method in this way:
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame

sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)

# db_name / table_name are set to one of the catalog databases and tables listed above
source_activities = glueContext.create_dynamic_frame.from_catalog(
    database=db_name,
    table_name=table_name
)
When database="s3_connection_db", everything works fine; however, when I set database="glue_catalog_mysql_connection_db", I get the following error:
Py4JJavaError: An error occurred while calling o45.getDynamicFrame.
: java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver
I understand the issue is related to the fact that I am trying to fetch data from a MySQL table, but I am not sure how to solve it. By the way, the job runs fine in the Glue console.
I would really appreciate some help, thanks!
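In case it is relevant, this is what I have been thinking of trying next, though it is only a guess on my part: pointing the local Spark session at a downloaded MySQL Connector/J jar before creating the GlueContext (the jar path below is just illustrative):

from pyspark import SparkConf
from pyspark.context import SparkContext
from awsglue.context import GlueContext

# Guess: make the JDBC driver visible to Spark inside the container
conf = SparkConf().set("spark.jars", "/home/glue_user/workspace/jars/mysql-connector-java-8.0.28.jar")
sc = SparkContext.getOrCreate(conf=conf)
glueContext = GlueContext(sc)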
I'm making a simple app to access Google Sheets that I have saved on Google Drive. I set up a project on Google, created the OAuth credentials, and ran the Python quickstart code to generate the token.json file.
Yesterday, after doing that, I ran this portion of the quickstart code and it ran perfectly and returned the rows from the sample spreadsheet:
### Add step to pull in previous staff comments, join on MRN
from __future__ import print_function

from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials

### This only runs when not connected to NetExtender. Should work when ported to Citrix, but running locally for testing is going to be difficult
### Gsheets
SCOPES = ['https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/spreadsheets']

# The ID and range of a sample spreadsheet.
SAMPLE_SPREADSHEET_ID = '1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms'
SAMPLE_RANGE_NAME = 'Class Data!A2:E'

creds = Credentials.from_authorized_user_file('token.json', SCOPES)
service = build('sheets', 'v4', credentials=creds)

# Call the Sheets API
sheet = service.spreadsheets()
result = sheet.values().get(spreadsheetId=SAMPLE_SPREADSHEET_ID,
                            range=SAMPLE_RANGE_NAME).execute()
values = result.get('values', [])

if not values:
    print('No data found.')
else:
    print('Name, Major:')
    for row in values:
        # Print columns A and E, which correspond to indices 0 and 4.
        print('%s, %s' % (row[0], row[4]))
However, today when I run that code, it doesn't work anymore. I get an error:
('invalid_scope: Bad Request', {'error': 'invalid_scope', 'error_description':'Bad Request'})
Are the token files single-use, and would I need to generate a new one every time I want to run this (that's the only reason I can imagine it would work fine yesterday but not today)? If that is the issue, is there a way to program this so that I don't need to re-authenticate with Google and create a new token file every time I want to run it?
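For reference, this is roughly the token-handling block from the quickstart that I left out of the snippet above; it refreshes an expired token or falls back to a full OAuth flow (sketched from memory, so treat it as approximate):

import os.path

creds = None
if os.path.exists('token.json'):
    creds = Credentials.from_authorized_user_file('token.json', SCOPES)
# If there are no (valid) credentials, refresh them or run the OAuth flow again
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    else:
        flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
        creds = flow.run_local_server(port=0)
    # Save the refreshed/new credentials for the next run
    with open('token.json', 'w') as token:
        token.write(creds.to_json())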
Thanks!
I have been trying to build the example GetAll Neo4j server extension, but unfortunately I cannot make it work. I installed the Windows version of Neo4j and run it as a server. I also installed the Python neo4jrestclient package, and I am accessing Neo4j through Python scripts. The following works fine:
from neo4jrestclient.client import GraphDatabase
gdb = GraphDatabase("http://localhost:7474/db/data/")
print gdb.extensions
It gives me "CypherPlugin" and "GremlinPlugin". I want to build the example GetAll server extension, which is Java. I am using Eclipse. I am able to create the jar file at "c:\neo4j_installation_root\neo4j-community-1.7\plugins\GetAll.jar", but when I restart the Neo4j server and run neo4jrestclient, it does not show the GetAll server extension. I have searched a lot, but in vain. I have lots of experience with C++ and Python, but I am new to Java. I would really appreciate some help getting Neo4j server extensions to build; it is critically important for my evaluation of Neo4j.
Are you sure there is a META-INF/services entry listing the plugin class, and that the jar file is created with intermediate directories (which is not the default in Eclipse's export settings) so the directories are seen by the classloader?
Check out the tips at http://docs.neo4j.org/chunked/snapshot/server-plugins.html
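For instance, the services file inside the jar is META-INF/services/org.neo4j.server.plugins.ServerPlugin and contains just the fully qualified plugin class, something like the following (the package name assumes the GetAll example from the docs; adjust it to yours):

org.neo4j.examples.server.plugins.GetAll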
You can do get-all with Bulbs (http://bulbflow.com) without building an extension:
>>> from bulbs.neo4jserver import Graph
>>> g = Graph()
>>> g.vertices.get_all()
>>> g.edges.get_all()
Custom models work the same way:
# people.py

from bulbs.model import Node, Relationship
from bulbs.property import String, Integer, DateTime
from bulbs.utils import current_datetime

class Person(Node):
    element_type = "person"

    name = String(nullable=False)
    age = Integer()

class Knows(Relationship):
    label = "knows"

    created = DateTime(default=current_datetime, nullable=False)
And then call get_all on the model proxies:
>>> from people import Person, Knows
>>> from bulbs.neo4jserver import Graph
>>> g = Graph()
>>> g.add_proxy("people", Person)
>>> g.add_proxy("knows", Knows)
>>> james = g.people.create(name="James")
>>> julie = g.people.create(name="Julie")
>>> g.knows.create(james, julie)
>>> g.people.get_all()
>>> g.knows.get_all()