Need to print certain letters after a particular word using shell script - devops

I am new to shell scripting and I want to print the IPs inside the current CIDR range and the proposed CIDR range. Below is the output:
{
  "acknowledgeRequiredBy": xxxxxx000,
  "acknowledged": false,
  "acknowledgedBy": "name@email.com",
  "acknowledgedOn": xxxxxx00000,
  "contacts": [
    "dl@email.com",
    "dl2@gmail.com"
  ],
  "currentCidrs": [
    "1.2.3.4/24",
    "5.6.7.8/24",
    "9.10.11.12/24",
    "13.14.15.16/24",
    "17.18.19.20/24",
    "21.22.23.24/24",
    "25.26.27.28/24",
  ],
  "id": 1x4x3xx,
  "latestTicketId": 0000,
  "mapAlias": "sample name",
  "mcmMapRuleId": 111xxx,
  "proposedCidrs": [
    "25.26.27.28/24",
    "1.2.3.4/24",
    "5.6.7.8/24",
    "9.10.11.12/24",
  ],
  "ruleName": "namerule.com",
  "service": "S",
  "shared": na,
  "sureRouteName": "example",
I want the output to be only the IPs. Please help me with the logic.
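A minimal sketch of one way to do it, assuming the full response is saved to a file (response.json is a placeholder name) and is valid JSON; jq handles that directly, and a plain grep over the raw text works as a fallback:

# jq: print every entry of both CIDR arrays, one per line
jq -r '.currentCidrs[], .proposedCidrs[]' response.json

# grep fallback: pull anything shaped like an IPv4 CIDR out of the text
grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]+' response.json

If you want the addresses without the /24 suffix, pipe either command through cut -d'/' -f1.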

Related

How do I force dependencies into separate chunks using manualChunks?

manualChunks: {
  '@dnd-kit/core': [
    '@dnd-kit/core',
    '@dnd-kit/sortable',
    '@dnd-kit/utilities',
  ],
  '@radix-ui/react-collapsible': [
    '@radix-ui/react-collapsible',
    '@radix-ui/react-dropdown-menu',
    '@radix-ui/react-navigation-menu',
  ],
  react: ['react', 'react-dom'],
  'react-aria': [
    '@react-aria/checkbox',
    '@react-aria/dialog',
    '@react-aria/focus',
    '@react-aria/meter',
    '@react-aria/visually-hidden',
    '@react-aria/utils',
    '@react-aria/tooltip',
    '@react-aria/overlays',
    '@react-aria/radio',
  ],
  'react-stately': [
    '@react-stately/radio',
    '@react-stately/toggle',
    '@react-stately/checkbox',
  ],
  slate: [
    '@contra/slate',
    'slate',
    'slate-react',
    'slate-history',
    'slate-hyperscript',
  ],
  'stream-chat': ['stream-chat', 'stream-chat-react'],
  stripe: ['@stripe/react-stripe-js', '@stripe/stripe-js'],
  visx: [
    '@visx/curve',
    '@visx/event',
    '@visx/gradient',
    '@visx/responsive',
    '@visx/scale',
    '@visx/shape',
    '@visx/tooltip',
  ],
},
The above is my Rollup manualChunks configuration.
Based on this configuration, I would expect several chunks that group these dependencies together.
However, almost everything ends up in the stream-chat chunk.
What's happening, and how do I achieve the desired result?
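One possibility (not from the original post) is to switch to the function form of manualChunks, which Rollup supports and which makes the chunk assignment explicit per module. A rough sketch reusing the same package groups as the object config above:

// rollup.config.js (sketch; assumes the same packages as the object config above)
export default {
  // ...input, plugins, etc.
  output: {
    manualChunks(id) {
      // id is the resolved module path, e.g. ".../node_modules/@dnd-kit/core/index.js"
      if (!id.includes('node_modules')) return; // leave application code alone
      if (id.includes('@dnd-kit/')) return 'dnd-kit';
      if (id.includes('stream-chat')) return 'stream-chat';
      if (id.includes('@stripe/')) return 'stripe';
      if (id.includes('slate')) return 'slate';
      if (id.includes('@visx/')) return 'visx';
      if (id.includes('react')) return 'react'; // catches react, react-dom, @react-aria, etc.
      // anything else: fall through and let Rollup decide
    },
  },
};

The order of the checks matters: the broad react test comes last so it does not swallow the more specific groups.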

Telegraf splits input data into different outputs

I want to write a Telegraf config file which will:
Use the openweathermap input or a custom HTTP request result:
{
  "fields": {
    ...
    "humidity": 97,
    "temperature": -11.34,
    ...
  },
  "name": "weather",
  "tags": {...},
  "timestamp": 1675786146
}
Split the result into two similar JSONs:
{
  "sensorID": "owm",
  "timestamp": 1675786146,
  "value": 97,
  "type": "humidity"
}
and
{
  "sensorID": "owm",
  "timestamp": 1675786146,
  "value": -11.34,
  "type": "temperature"
}
Send these JSONs to an MQTT queue.
Is it possible, or must I create two different configs and make two API calls?
I found the following configuration, which solves my problem:
[[outputs.mqtt]]
  servers = ["${MQTT_URL}", ]
  topic_prefix = "owm/data"
  data_format = "json"
  json_transformation = '{"sensorID":"owm","type":"temperature","value":fields.main_temp,"timestamp":timestamp}'

[[outputs.mqtt]]
  servers = ["${MQTT_URL}", ]
  topic_prefix = "owm/data"
  data_format = "json"
  json_transformation = '{"sensorID":"owm","type":"humidity","value":fields.main_humidity,"timestamp":timestamp}'

[[inputs.http]]
  urls = [
    "https://api.openweathermap.org/data/2.5/weather?lat=${LAT}&lon=${LON}&appid=${API_KEY}&units=metric"
  ]
  data_format = "json"
Here we:
Retrieve data from OWM in the input plugin.
Transform the received data structure into the needed structure in two different output plugins, using the JSONata language (https://jsonata.org/) for this.

special character conversion in bash output

Hi :P I'm a beginner here and I was doing a small project in bash to automatically create prewritten text files, when at the last moment I realized that characters like double quotes were being dropped when writing the output to my file (for example, "format_version" comes out as format_version). Please help me ('-')
python ID_1.py
python ID_2.py
echo "bash(...)"

# read the two generated UUIDs back in
ID1=$(cat link1.txt)
ID2=$(cat link2.txt)

# ask for the pack name and description
echo "Name of your pack:"
read NP
echo "This description:"
read DC

cd storage/downloads

# escape the inner double quotes with \" so they are written literally;
# the variables still expand because the outer quotes are double quotes
echo "{
  \"format_version\": 1,
  \"header\": {
    \"name\": \"$NP\",
    \"description\": \"$DC\",
    \"uuid\": \"$ID1\",
    \"version\": [0, 0, 1]
  },
  \"modules\": [
    {
      \"type\": \"resources\",
      \"uuid\": \"$ID2\",
      \"version\": [0, 0, 1]
    }
  ]
}" > manifest.json

Is there a way to use OCR to extract specific data from a CAD technical drawing?

I'm trying to use OCR to extract only the base dimensions of a CAD model, but there are other associative dimensions that I don't need (like angles, length from baseline to hole, etc). Here is an example of a technical drawing. (The numbers in red circles are the base dimensions, the rest in purple highlights are the ones to ignore.) How can I tell my program to extract only the base dimensions (the height, length, and width of a block before it goes through the CNC)?
The issue is that the drawings I get are not in a specific format, so I can't tell the OCR where the dimensions are. It has to figure that out on its own, contextually.
Should I train the program through machine learning by running several iterations and correcting it? If so, what methods are there? The only thing I can think of is OpenCV cascade classifiers.
Or are there other methods for solving this problem?
Sorry for the long post. Thanks.
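As a starting point before any training, a plain OCR pass that also reports word positions at least pulls out every numeric token and where it sits on the sheet; a minimal sketch with pytesseract and OpenCV (my assumption, not from the question; drawing.png is a placeholder), which still leaves the base-vs-associative classification to you:

import cv2
import pytesseract
from pytesseract import Output

# run OCR and keep only tokens that look like plain dimensions, with their pixel boxes
img = cv2.imread("drawing.png")  # hypothetical filename
data = pytesseract.image_to_data(img, output_type=Output.DICT)

for text, x, y, w, h, conf in zip(data["text"], data["left"], data["top"],
                                  data["width"], data["height"], data["conf"]):
    token = text.strip()
    if token.replace(".", "", 1).isdigit() and float(conf) > 60:
        print(f"{token} at ({x}, {y}, {w}, {h})")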
I feel you... it's a very tricky problem, and we spent the last 3 years finding a solution for it. Forgive me for mentioning our own solution, but it will certainly solve your problem: pip install werk24
from werk24 import Hook, W24AskVariantMeasures
from werk24.models.techread import W24TechreadMessage
from werk24.utils import w24_read_sync

from . import get_drawing_bytes  # define your own


def recv_measures(message: W24TechreadMessage) -> None:
    for cur_measure in message.payload_dict.get('measures'):
        print(cur_measure)


if __name__ == "__main__":
    # define what information you want to receive from the API
    # and what shall be done when the info is available.
    hooks = [Hook(ask=W24AskVariantMeasures(), function=recv_measures)]

    # submit the request to the Werk24 API
    w24_read_sync(get_drawing_bytes(), hooks)
In your example it will return, for example, the following measure:
{
  "position": <STRIPPED>,
  "label": {
    "blurb": "ø30 H7 +0.0210/0",
    "quantity": 1,
    "size": {
      "blurb": "30",
      "size_type": "DIAMETER",
      "nominal_size": "30.0"
    },
    "unit": "MILLIMETER",
    "size_tolerance": {
      "toleration_type": "FIT_SIZE_ISO",
      "blurb": "H7",
      "deviation_lower": "0.0",
      "deviation_upper": "0.0210",
      "fundamental_deviation": "H",
      "tolerance_grade": {
        "grade": 7,
        "warnings": []
      },
      "thread": null,
      "chamfer": null,
      "depth": null,
      "test_dimension": null
    }
  },
  "warnings": [],
  "confidence": 0.98810
}
or for a GD&T
{
  "position": <STRIPPED>,
  "frame": {
    "blurb": "[⟂|0.05|A]",
    "characteristic": "⟂",
    "zone_shape": null,
    "zone_value": {
      "blurb": "0.05",
      "width_min": 0.05,
      "width_max": null,
      "extend_quantity": null,
      "extend_shape": null,
      "extend": null,
      "extend_angle": null
    },
    "zone_combinations": [],
    "zone_offset": null,
    "zone_constraint": null,
    "feature_filter": null,
    "feature_associated": null,
    "feature_derived": null,
    "reference_association": null,
    "reference_parameter": null,
    "material_condition": null,
    "state": null,
    "data": [
      {
        "blurb": "A"
      }
    ]
  }
}
Check the documentation on Werk24 for details.
Although a managed offering, Mixpeek is one free option:
pip install mixpeek
from mixpeek import Mixpeek

mix = Mixpeek(
    api_key="my-api-key"
)

mix.upload(file_name="design_spec.dwg", file_path="s3://design_spec_1.dwg")
This /upload endpoint will extract the contents of your DWG file, then when you search for terms it will include the file_path so you can render it in your HTML.
Behind the scenes it uses the open source LibreDWG library to run a number of AutoCAD native commands such as DATAEXTRACTION.
Now you can search for a term and the relevant DWG file (in addition to the context in which it exists) will be returned:
mix.search(query="retainer", include_context=True)
[
  {
    "file_id": "6377c98b3c4f239f17663d79",
    "filename": "design_spec.dwg",
    "context": [
      {
        "texts": [
          {
            "type": "text",
            "value": "DV-34-"
          },
          {
            "type": "hit",
            "value": "RETAINER"
          },
          {
            "type": "text",
            "value": "."
          }
        ]
      }
    ],
    "importance": "100%",
    "static_file_url": "s3://design_spec_1.dwg"
  }
]
More documentation here: https://docs.mixpeek.com/

How to draw waveform from waveformdata object in iOS Swift?

[
  {
    "id": "48250",
    "created_at": "2014-07-06 13:05:10",
    "user_id": "7",
    "duration": "7376",
    "permalink": "shawne-back-to-the-roots-2-05072014",
    "description": "Years: 2000 - 2005\r\nSet Time: Warm Up (11 pm - 01 am)\r\n",
    "downloadable": "1",
    "genre": "Drum & Bass",
    "genre_slush": "drumandbass",
    "title": "Shawne @ Back To The Roots 2 (05.07.2014)",
    "uri": "https:\/\/api-v2.hearthis.at\/\/shawne-back-to-the-roots-2-05072014\/",
    "permalink_url": "http:\/\/hearthis.at\/\/shawne-back-to-the-roots-2-05072014\/",
    "artwork_url": "http:\/\/hearthis.at\/_\/cache\/images\/track\/500\/801982cafc20a06ccf6203f21f10c08d_w500.png",
    "background_url": "",
    "waveform_data": "http:\/\/hearthis.at\/_\/wave_data\/7\/3000_4382f398c454c47cf171aab674cf00f0.mp3.js",
    "waveform_url": "http:\/\/hearthis.at\/_\/wave_image\/7\/4382f398c454c47cf171aab674cf00f0.mp3.png",
    "user": {
      "id": "7",
      "permalink": "shawne",
      "username": "Shawne (hearthis.at)",
      "uri": "https:\/\/api-v2.hearthis.at\/shawne\/",
      "permalink_url": "http:\/\/hearthis.at\/shawne\/",
      "avatar_url": "http:\/\/hearthis.at\/_\/cache\/images\/user\/512\/06a8299b0e7d8f2909a22697badd7c09_w512.jpg"
    },
    "stream_url": "http:\/\/hearthis.at\/shawne\/shawne-back-to-the-roots-2-05072014\/listen\/",
    "download_url": "http:\/\/hearthis.at\/shawne\/shawne-back-to-the-roots-2-05072014\/download\/",
    "playback_count": "75",
    "download_count": "9",
    "favoritings_count": "7",
    "favorited": false,
    "comment_count": "0"
  }
]
This API returns a waveform URL and waveform data. How do I convert the waveform data into a drawn waveform similar to the image at the waveform URL?
API: https://hearthis.at/api-v2/
It looks like the "data" is merely a succession of bar heights:
[136,132,133,133,138,...]
So just draw a succession of bars at those heights (or heights proportional to them). You might need to draw just every nth bar, or maybe average each clump of n bars together, in order to get a neater representation (that is what they do at the site you pointed to).
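A minimal UIKit sketch of that idea (mine, not from the answer above): a view that draws one bar per sample, assuming the waveform_data file decodes to a plain [Int] array like the one shown.

import UIKit

// Draws one vertical bar per waveform sample.
// `samples` is assumed to hold the decoded bar heights, e.g. [136, 132, 133, ...].
final class WaveformView: UIView {
    var samples: [Int] = [] {
        didSet { setNeedsDisplay() }
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext(),
              let maxSample = samples.max(), maxSample > 0 else { return }

        context.setFillColor(UIColor.systemBlue.cgColor)
        let barWidth = rect.width / CGFloat(samples.count)

        for (index, sample) in samples.enumerated() {
            // Scale each sample so the tallest bar fills the view's height.
            let barHeight = rect.height * CGFloat(sample) / CGFloat(maxSample)
            let barRect = CGRect(x: CGFloat(index) * barWidth,
                                 y: rect.height - barHeight,
                                 width: barWidth * 0.8,   // small gap between bars
                                 height: barHeight)
            context.fill(barRect)
        }
    }
}

If the array is much wider than the view, average every n samples first, as suggested above, before handing them to the view.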
