Hi :P I'm a beginner here and I was writing a small Bash project to automatically create prewritten text files, when at the last moment I realized that double-quote characters were being deleted when writing the output to my file (for example, "format_version" comes out as format_version). Please help me ('-')
python ID_1.py
python ID_2.py
echo "bash(...)"
ID1=$(cat link1.txt)
ID2=$(cat link2.txt)
echo "Name of your pack=$NP"
read NP
echo "This description=$DC"
read DC
cd storage/downloads
echo "{
("format_version"): 1,
"header": {
"name": "$NP",
"description": "$DC",
"uuid": "$ID1",
"version": [0, 0, 1]
},
"modules": [
{
"type": "resources",
"uuid": "$ID2",
"version": [0, 0, 1]
}
]
}" > manisfest.json
I want to write a Telegraf config file which will:
Use the openweathermap input or a custom HTTP request result:
{
  "fields": {
    ...
    "humidity": 97,
    "temperature": -11.34,
    ...
  },
  "name": "weather",
  "tags": {...},
  "timestamp": 1675786146
}
Split the result into two similar JSONs:
{
  "sensorID": "owm",
  "timestamp": 1675786146,
  "value": 97,
  "type": "humidity"
}
and
{
  "sensorID": "owm",
  "timestamp": 1675786146,
  "value": -11.34,
  "type": "temperature"
}
Send these JSONs into an MQTT queue.
Is this possible, or must I create two different configs and make two API calls?
I found the following configuration, which solves my problem:
[[outputs.mqtt]]
servers = ["${MQTT_URL}", ]
topic_prefix = "owm/data"
data_format = "json"
json_transformation = '{"sensorID":"owm","type":"temperature","value":fields.main_temp,"timestamp":timestamp}'
[[outputs.mqtt]]
servers = ["${MQTT_URL}", ]
topic_prefix = "owm/data"
data_format = "json"
json_transformation = '{"sensorID":"owm","type":"humidity","value":fields.main_humidity,"timestamp":timestamp}'
[[inputs.http]]
urls = [
"https://api.openweathermap.org/data/2.5/weather?lat={$LAT}&lon={$LON}2&appid=${API_KEY}&units=metric"
]
data_format = "json"
Here we:
Retrieve data from OWM in the input plugin.
Transform the received data structure into the needed structure in two different output plugins, using the JSONata language (https://jsonata.org/) for this aim.
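Outside Telegraf, the effect of the two json_transformation expressions can be sanity-checked with a small shell sketch. The sample metric below is hypothetical, and the field names main_temp and main_humidity are assumptions about what inputs.http flattens out of the OWM response; check your actual metric names:

```shell
# Hypothetical metric, shaped like Telegraf's JSON serialization of the input.
metric='{"fields":{"main_humidity":97,"main_temp":-11.34},"name":"weather","timestamp":1675786146}'

# Reproduce the split that the two json_transformation expressions perform,
# one output object per measurement type:
echo "$metric" | python3 -c '
import json, sys
m = json.load(sys.stdin)
for typ, key in [("temperature", "main_temp"), ("humidity", "main_humidity")]:
    print(json.dumps({"sensorID": "owm", "type": typ,
                      "value": m["fields"][key], "timestamp": m["timestamp"]}))
'
```

Inside Telegraf the transformation is evaluated per metric by each outputs.mqtt block, so no second API call is needed.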
I am new to shell scripting, and I want to print the IPs inside the current CIDR range and the proposed CIDR range. Below is the output:
{
  "acknowledgeRequiredBy": xxxxxx000,
  "acknowledged": false,
  "acknowledgedBy": "name#email.com",
  "acknowledgedOn": xxxxxx00000,
  "contacts": [
    "dl#email.com",
    "dl2#gmail.com"
  ],
  "currentCidrs": [
    "1.2.3.4/24",
    "5.6.7.8/24",
    "9.10.11.12/24",
    "13.14.15.16/24",
    "17.18.19.20/24",
    "21.22.23.24/24",
    "25.26.27.28/24",
  ],
  "id": 1x4x3xx,
  "latestTicketId": 0000,
  "mapAlias": "sample name",
  "mcmMapRuleId": 111xxx,
  "proposedCidrs": [
    "25.26.27.28/24",
    "1.2.3.4/24",
    "5.6.7.8/24",
    "9.10.11.12/24",
  ],
  "ruleName": "namerule.com",
  "service": "S",
  "shared": na,
  "sureRouteName": "example",
I want the output to contain only the IPs. Please help me with the logic.
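Since the pasted output is not strictly valid JSON (unquoted redacted values, trailing commas), a plain regex is the most robust way to pull the CIDRs out. A minimal sketch, using a small hypothetical sample file in place of the real response:

```shell
#!/usr/bin/env bash
# Hypothetical sample standing in for the real (redacted) API response.
cat > response.json <<'EOF'
{
  "currentCidrs": [
    "1.2.3.4/24",
    "5.6.7.8/24"
  ],
  "proposedCidrs": [
    "25.26.27.28/24",
    "1.2.3.4/24"
  ]
}
EOF

# All CIDR entries, current and proposed:
grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]+' response.json

# Only the IP part, with the /24 prefix length stripped:
grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]+' response.json | cut -d/ -f1
```

If the response were valid JSON, jq -r '.currentCidrs[], .proposedCidrs[]' response.json would do the same job more precisely.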
I have a couple of Eventbridge events ({"root": "child"} and {"root": ["child"]}) which I want to match against pattern {"root": ["child"]}.
Both of these matches are valid according to aws events test-event-pattern:
aws events test-event-pattern --event '{"id": 1234567890, "source": "my-source", "detail-type": "my-detail-type", "detail": {"root": "child"}, "account": 1234567890, "region": "eu-west-1", "time": "1970-12-20T20:00:00Z"}' --event-pattern '{"source": ["my-source"], "detail": {"root": ["child"]}}' --output text
True
aws events test-event-pattern --event '{"id": 1234567890, "source": "my-source", "detail-type": "my-detail-type", "detail": {"root": ["child"]}, "account": 1234567890, "region": "eu-west-1", "time": "1970-12-20T20:00:00Z"}' --event-pattern '{"source": ["my-source"], "detail": {"root": ["child"]}}' --output text
True
But I can only get the former pattern to match when I use moto to mock the message flow (script at bottom):
% python scripts/demo/moto_events/test_moto.py
{"root": "child"} -> True
{"root": ["child"]} -> False
Have I made a mistake here, or is moto's events behaviour diverging from AWS EventBridge?
TIA
cc @BertBlommers
PS: using moto 2.0.11
from moto import mock_events, mock_sqs
from botocore.exceptions import ClientError

import boto3, copy, json, yaml

EventBase = yaml.safe_load("""
Source: my-source
DetailType: my-detail-type
""")

Statement = yaml.safe_load("""
Effect: Allow
Principal:
  Service: events.amazonaws.com
Action: "sqs:SendMessage"
Resource: "*"
""")

def drain_sqs(sqs, queueurl, nmax=100):
    messages, count = [], 0
    while True:
        resp = sqs.receive_message(QueueUrl=queueurl)
        if ("Messages" not in resp or
                count > nmax):
            break
        messages += resp["Messages"]
        count += 1
    return messages

@mock_events
@mock_sqs
def test_event_pattern(detail,
                       eventbase=EventBase,
                       statement=Statement):
    events, sqs = boto3.client("events"), boto3.client("sqs")
    policy = json.dumps({"Version": "2012-10-17",
                         "Statement": [statement]})
    queue = sqs.create_queue(QueueName="my-queue",
                             Attributes={"Policy": policy})
    queueurl = queue["QueueUrl"]
    queueattrs = sqs.get_queue_attributes(QueueUrl=queueurl,
                                          AttributeNames=["QueueArn"])["Attributes"]
    queuearn = queueattrs["QueueArn"]
    pattern = {"detail": detail["pattern"]}
    events.put_rule(Name="my-rule",
                    State="ENABLED",
                    EventPattern=json.dumps(pattern))
    events.put_targets(Rule="my-rule",
                       Targets=[{"Id": "my-target-id",
                                 "Arn": queuearn}])
    event = copy.deepcopy(eventbase)
    event["Detail"] = json.dumps(detail["event"])
    events.put_events(Entries=[event])
    return drain_sqs(sqs, queueurl)

if __name__ == "__main__":
    try:
        for detail in [{"event": {"root": "child"},
                        "pattern": {"root": ["child"]}},
                       {"event": {"root": ["child"]},
                        "pattern": {"root": ["child"]}}]:
            messages = test_event_pattern(detail)
            print("%s -> %s" % (json.dumps(detail["event"]),
                                bool(len(messages))))
    except ClientError as error:
        print("Error: %s" % str(error))
I'm trying to use OCR to extract only the base dimensions of a CAD model, but there are other associative dimensions that I don't need (like angles, length from baseline to hole, etc). Here is an example of a technical drawing. (The numbers in red circles are the base dimensions, the rest in purple highlights are the ones to ignore.) How can I tell my program to extract only the base dimensions (the height, length, and width of a block before it goes through the CNC)?
The issue is that the drawings I get are not in a specific format, so I can't tell the OCR where the dimensions are; it has to figure it out on its own, contextually.
Should I train the program through machine learning by running several iterations and correcting it? If so, what methods are there? The only thing I can think of is OpenCV cascade classifiers.
Or are there other methods for solving this problem?
Sorry for the long post. Thanks.
I feel you... it's a very tricky problem, and we spent the last 3 years finding a solution for it. Forgive me for mentioning our own solution, but it will certainly solve your problem: pip install werk24
from werk24 import Hook, W24AskVariantMeasures
from werk24.models.techread import W24TechreadMessage
from werk24.utils import w24_read_sync

from . import get_drawing_bytes  # define your own

def recv_measures(message: W24TechreadMessage) -> None:
    for cur_measure in message.payload_dict.get('measures'):
        print(cur_measure)

if __name__ == "__main__":
    # define what information you want to receive from the API
    # and what shall be done when the info is available.
    hooks = [Hook(ask=W24AskVariantMeasures(), function=recv_measures)]
    # submit the request to the Werk24 API
    w24_read_sync(get_drawing_bytes(), hooks)
For your example it will return, for instance, the following measure:
{
  "position": <STRIPPED>,
  "label": {
    "blurb": "ø30 H7 +0.0210/0",
    "quantity": 1,
    "size": {
      "blurb": "30",
      "size_type": "DIAMETER",
      "nominal_size": "30.0"
    },
    "unit": "MILLIMETER",
    "size_tolerance": {
      "toleration_type": "FIT_SIZE_ISO",
      "blurb": "H7",
      "deviation_lower": "0.0",
      "deviation_upper": "0.0210",
      "fundamental_deviation": "H",
      "tolerance_grade": {
        "grade": 7,
        "warnings": []
      },
      "thread": null,
      "chamfer": null,
      "depth": null,
      "test_dimension": null
    }
  },
  "warnings": [],
  "confidence": 0.98810
}
or for a GD&T
{
  "position": <STRIPPED>,
  "frame": {
    "blurb": "[⟂|0.05|A]",
    "characteristic": "⟂",
    "zone_shape": null,
    "zone_value": {
      "blurb": "0.05",
      "width_min": 0.05,
      "width_max": null,
      "extend_quantity": null,
      "extend_shape": null,
      "extend": null,
      "extend_angle": null
    },
    "zone_combinations": [],
    "zone_offset": null,
    "zone_constraint": null,
    "feature_filter": null,
    "feature_associated": null,
    "feature_derived": null,
    "reference_association": null,
    "reference_parameter": null,
    "material_condition": null,
    "state": null,
    "data": [
      {
        "blurb": "A"
      }
    ]
  }
}
Check the documentation on Werk24 for details.
Although a managed offering, Mixpeek is one free option:
pip install mixpeek
from mixpeek import Mixpeek
mix = Mixpeek(
    api_key="my-api-key"
)
mix.upload(file_name="design_spec.dwg", file_path="s3://design_spec_1.dwg")
This /upload endpoint will extract the contents of your DWG file; then, when you search for terms, the result will include the file_path so you can render it in your HTML.
Behind the scenes it uses the open source LibreDWG library to run a number of AutoCAD native commands such as DATAEXTRACTION.
Now you can search for a term and the relevant DWG file (in addition to the context in which it exists) will be returned:
mix.search(query="retainer", include_context=True)
[
  {
    "file_id": "6377c98b3c4f239f17663d79",
    "filename": "design_spec.dwg",
    "context": [
      {
        "texts": [
          {
            "type": "text",
            "value": "DV-34-"
          },
          {
            "type": "hit",
            "value": "RETAINER"
          },
          {
            "type": "text",
            "value": "."
          }
        ]
      }
    ],
    "importance": "100%",
    "static_file_url": "s3://design_spec_1.dwg"
  }
]
More documentation here: https://docs.mixpeek.com/
I'm trying to have my Jenkins job download some files from Artifactory:
a/b/c
  d1
    file1
  d2
    file2
This is what I want to achieve:
x/y/z
  d1
    file1
  d2
    file2
and I have the following file spec:
{
  "files": [{
    "pattern": "a/b/c/*",
    "target": "x/y/z/",
    "flat": "false",
    "recursive": "true"
  }]
}
but what I end up with instead is
x/y/z/a/b/c
  d1
    file1
  d2
    file2
What am I doing wrong?
You should use the following pattern:
{
  "files": [
    {
      "pattern": "a/b/c/(.*)",
      "target": "x/y/z/{1}",
      "flat": "true",
      "recursive": "true",
      "regexp": "true"
    }
  ]
}
By setting flat to true, artifacts are downloaded to the exact target path specified and their hierarchy in the source repository is ignored; the capture group in the pattern and the {1} placeholder in the target then re-create the subpaths below a/b/c under the new root.
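For completeness, a spec like this is normally saved to a file and passed to the JFrog CLI. A minimal sketch; the download command itself is left commented out because it needs a configured Artifactory server, and jf rt download is the assumed CLI entry point:

```shell
#!/usr/bin/env bash
# Write the file spec from the answer above to disk.
cat > filespec.json <<'EOF'
{
  "files": [
    {
      "pattern": "a/b/c/(.*)",
      "target": "x/y/z/{1}",
      "flat": "true",
      "recursive": "true",
      "regexp": "true"
    }
  ]
}
EOF

# Sanity-check that the spec is valid JSON before handing it to the CLI;
# a stray trailing comma is the most common reason a spec is rejected.
python3 -m json.tool filespec.json > /dev/null && echo "spec OK"

# Against a configured server this would perform the actual download:
# jf rt download --spec filespec.json
```

In a Jenkins job the same spec can also be passed inline to the Artifactory plugin's download step instead of going through the CLI.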