Can anyone tell me how to generate a replay file from a pcap?
I want to generate a replay file from a pcap capture, in a format like this:
('connect', 1, 0.0)
('send', 1, b'\x00\x00\x00\x85\xff\x53\x4d\x42\x72\x00\x00\x00\x00\x18\x53\xc0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xfe\x00\x00\x40\x00\x00\x62\x00\x02\x50\x43\x20\x4e\x45\x54\x57\x4f\x52\x4b\x20\x50\x52\x4f\x47\x52\x41\x4d\x20\x31\x2e\x30\x00\x02\x4c\x41\x4e\x4d\x41\x4e\x31\x2e\x30\x00\x02\x57\x69\x6e\x64\x6f\x77\x73\x20\x66\x6f\x72\x20\x57\x6f\x72\x6b\x67\x72\x6f\x75\x70\x73\x20\x33\x2e\x31\x61\x00\x02\x4c\x4d\x31\x2e\x32\x58\x30\x30\x32\x00\x02\x4c\x41\x4e\x4d\x41\x4e\x32\x2e\x31\x00\x02\x4e\x54\x20\x4c\x4d\x20\x30\x2e\x31\x32\x00', 9.812499774852768e-05)
('recv', 1, 0.011641267999948468)
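In case it helps frame the question, below is a rough sketch of one way such tuples might be derived with scapy. It assumes the second field is a connection id, the last field is a time offset from the start of the capture, and my_client_ip is a placeholder for the host whose outgoing data should become 'send' events; the replay tool you are targeting may expect different semantics.
from scapy.all import rdpcap, IP, TCP, Raw

# Placeholder: the host that initiates the connections in the capture
my_client_ip = '192.168.1.10'

packets = rdpcap('capture.pcap')
events = []
conn_ids = {}        # (src, sport, dst, dport) -> connection id
start_time = None

for pkt in packets:
    if IP not in pkt or TCP not in pkt:
        continue
    if start_time is None:
        start_time = float(pkt.time)
    delta = float(pkt.time) - start_time
    key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
    rkey = (pkt[IP].dst, pkt[TCP].dport, pkt[IP].src, pkt[TCP].sport)

    # A bare SYN from the client opens a new connection
    if pkt[TCP].flags & 0x02 and not pkt[TCP].flags & 0x10 and pkt[IP].src == my_client_ip:
        conn_ids[key] = len(conn_ids) + 1
        events.append(('connect', conn_ids[key], delta))
    elif Raw in pkt:
        if key in conn_ids:          # payload sent by the client
            events.append(('send', conn_ids[key], bytes(pkt[Raw].load), delta))
        elif rkey in conn_ids:       # payload coming back from the server
            events.append(('recv', conn_ids[rkey], delta))

for event in events:
    print(event)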
Is there a built-in way in Ruby/Rails to detect the MIME type of a file given its URL, without relying on its extension?
For example, let's say there is an image file at an external URL that I want to get the MIME type of: https://www.example.com/images/file
The file does not have an extension, but let's assume it is a JPEG.
How can I verify this / get the file's MIME type in Rails? I would ideally love a way to do this with built-in functionality and not have to rely on a third-party gem.
I've looked over this question.
I don't think it's worth avoiding a third-party gem for this. The problem space is well documented and stable, and most of the libraries are too.
But if you must, it can be done without an external gem, especially if you constrain yourself to a small whitelist of file types. The "magic number" pattern for most image files is pretty straightforward once you get the file onto your disk:
image = File.new("filename.jpg", "r")
image.read(10)
# => "\xFF\xD8\xFF\xE0\x00\x10JFIF"
Marcel, which you linked in your reference, can, if nothing else, serve as a great source for the magic-number sequences you'll need:
MAGIC = [
  ['image/jpeg', [[0, "\377\330\377"]]],
  ['image/png', [[0, "\211PNG\r\n\032\n"]]],
  ['image/gif', [[0, 'GIF87a'], [0, 'GIF89a']]],
  ['image/tiff', [[0, "MM\000*"], [0, "II*\000"], [0, "MM\000+"]]],
  ['image/bmp', [[0, 'BM', [[26, "\001\000", [[28, "\000\000"], [28, "\001\000"], [28, "\004\000"], [28, "\b\000"], [28, "\020\000"], [28, "\030\000"], [28, " \000"]]]]]]],
  # .....
]
I am working on a video analysis project which requires downloading videos from YouTube and uploading them to Google Cloud Storage. I could not figure out a way to upload them to GCS directly, so I tried downloading them to my local machine and then uploading them to GCS.
I went through multiple articles on Stack Overflow on the subject, such as
python: get all youtube video urls of a channel and
Download YouTube video using Python to a certain directory
and with the help of those I was able to come up with the following script.
import urllib.request
import json
from pytube import YouTube
import pickle

def get_all_video_in_channel(channel_id):
    api_key = 'AIzaSyCK9eQlD1ptx0SKMsmL0srmL2ua9_EuwSs'
    base_video_url = 'https://www.youtube.com/watch?v='
    base_search_url = 'https://www.googleapis.com/youtube/v3/search?'
    first_url = base_search_url + 'key={}&channelId={}&part=snippet,id&order=date&maxResults=25'.format(api_key, channel_id)
    video_links = []
    url = first_url
    while True:
        inp = urllib.request.urlopen(url)
        resp = json.load(inp)
        for i in resp['items']:
            if i['id']['kind'] == "youtube#video":
                video_links.append(base_video_url + i['id']['videoId'])
        try:
            next_page_token = resp['nextPageToken']
            url = first_url + '&pageToken={}'.format(next_page_token)
        except KeyError:
            break
    return video_links

# Build the list containing all the YouTube video urls in the channel
load_url = get_all_video_in_channel(channel_id)

# Download each video in the list to the local machine. Need to figure out
# if there is a way to upload them to GCS directly.
for i in range(0, len(load_url)):
    YouTube(load_url[i]).streams.first().download('C:/Users/Tushar/Documents/Serato_Video_Intelligence/youtube_videos')
It works only for the first two video URLs and then fails with the error below:
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "C:\Python37\lib\site-packages\pytube\streams.py", line 217, in download
    bytes_remaining = self.filesize
  File "C:\Python37\lib\site-packages\pytube\streams.py", line 164, in filesize
    headers = request.get(self.url, headers=True)
  File "C:\Python37\lib\site-packages\pytube\request.py", line 21, in get
    response = urlopen(url)
  File "C:\Python37\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Python37\lib\urllib\request.py", line 531, in open
    response = meth(req, response)
  File "C:\Python37\lib\urllib\request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Python37\lib\urllib\request.py", line 569, in error
    return self._call_chain(*args)
  File "C:\Python37\lib\urllib\request.py", line 503, in _call_chain
    result = func(*args)
  File "C:\Python37\lib\urllib\request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
I was hoping someone could help me understand what is going wrong here and how I can resolve this issue. I desperately need this and have been unable to resolve it for some time.
Thanks a lot in advance!
P.S. If possible, is there a way to upload them directly to GCS?
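For the upload step, this is roughly the kind of thing I imagine using with the google-cloud-storage client; the bucket name and object path below are placeholders, and it assumes credentials are already configured (e.g. via GOOGLE_APPLICATION_CREDENTIALS):
from google.cloud import storage

def upload_to_gcs(local_path, bucket_name, blob_name):
    # bucket_name and blob_name are placeholders for wherever the videos should live
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(blob_name)
    blob.upload_from_filename(local_path)

# Hypothetical usage after each download:
# upload_to_gcs('C:/Users/Tushar/Documents/Serato_Video_Intelligence/youtube_videos/video.mp4',
#               'my-video-bucket', 'youtube_videos/video.mp4')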
It seems like you could run into a conflict with YouTube's Terms of Service. I suggest you check this document and pay attention to section 5, letter B. [1]
[1]https://www.youtube.com/static?gl=US&template=terms
When processing a single large file, it can be broken up like so:
import dask.bag as db
my_file = db.read_text('filename', blocksize=int(1e7))
This works great, but the files I'm working with have a high level of redundancy and so we keep them compressed. Passing in compressed gzip files gives an error that seeking in gzip isn't supported and so it can't be read in blocks.
The documentation here http://dask.pydata.org/en/latest/bytes.html#compression suggests that some formats support random access.
The relevant internal code I think is here:
https://github.com/dask/dask/blob/master/dask/bytes/compression.py#L47
It looks like lzma might support it, but it's been commented out.
Adding lzma into the seekable_files dict, as in the commented-out code:
from dask.bytes.compression import seekable_files
import lzmaffi
seekable_files['xz'] = lzmaffi.LZMAFile
data = db.read_text('myfile.jsonl.lzma', blocksize=int(1e7), compression='xz')
Throws the following error:
Traceback (most recent call last):
  File "example.py", line 8, in <module>
    data = bag.read_text('myfile.jsonl.lzma', blocksize=int(1e7), compression='xz')
  File "condadir/lib/python3.5/site-packages/dask/bag/text.py", line 80, in read_text
    **(storage_options or {}))
  File "condadir/lib/python3.5/site-packages/dask/bytes/core.py", line 162, in read_bytes
    size = fs.logical_size(path, compression)
  File "condadir/lib/python3.5/site-packages/dask/bytes/core.py", line 500, in logical_size
    g.seek(0, 2)
io.UnsupportedOperation: seek
I assume that the functions at the bottom of that file (get_xz_blocks, for example) can be used for this, but they don't seem to be in use anywhere in the dask project.
Are there compression libraries that do support this seeking and chunking? If so, how can they be added?
Yes, you are right that the xz format can be useful to you. The confusion is that the file may be block-formatted, but the standard implementation lzmaffi.LZMAFile (or lzma) does not make use of this blocking. Note that block-formatting is only optional for xz files, e.g., by using --block-size=size with xz-utils.
The function compression.get_xz_blocks will give you the set of blocks in a file by reading the header only, rather than the whole file, and you could use this in combination with delayed, essentially repeating some of the logic in read_text. We have not put in the time to make this seamless; the same pattern could be used to write blocked xz files too.
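To make that concrete, here is a rough, untested sketch of the delayed pattern. It assumes get_xz_blocks(fn) describes the independent blocks as (offset, length) pairs, which may not match the real return value, and decompress_block is a hypothetical stand-in for whatever helper actually decompresses a single xz block; check dask/bytes/compression.py for the real interfaces.
import dask.bag as db
from dask import delayed
from dask.bytes import compression

# The file is assumed to be block-formatted, e.g. created with:
#   xz --block-size=10000000 myfile.jsonl
fn = 'myfile.jsonl.xz'

def decompress_block(raw_bytes):
    # Hypothetical stand-in: replace with the real helper that decompresses
    # a single xz block (see the block helpers in dask/bytes/compression.py).
    raise NotImplementedError

def load_block(fn, offset, length):
    # Read one compressed block by byte range and turn it into lines of text.
    with open(fn, 'rb') as f:
        f.seek(offset)
        raw = f.read(length)
    return decompress_block(raw).decode('utf-8').splitlines()

# Assumed shape: a list of (offset, length) pairs, one per xz block.
blocks = compression.get_xz_blocks(fn)
partitions = [delayed(load_block)(fn, off, length) for off, length in blocks]
data = db.from_delayed(partitions)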
.tick script:
stream
    |from()
        .measurement('httpjson_example')
    |alert()
        .crit(lambda: "temperature" < 70)
        // Whenever we get an alert write it to a file.
        .message('test')
        .log('/tmp/test.log')
Output test.log:
..."message":"test","CRITICAL","data":{"series":[{"name":"httpjson_example","tags":{"host":"influxdata","server":"http://...:8080/readings"},"columns":["time","dewPoint","heatIndex","humidity","response_time","temperature"],"values":[["2016-06-23T12:38:42Z",12.06,22.15,51.6,2.078549411,22.5]]}]}}
This script writes to the file, but I just want the string 'test' written.
At the moment this isn't possible without a bit of work writing your own UDF.
If you'd like to see this feature in Kapacitor, open a feature request that details your use case.
I am using Delphi 2010 and am looking for a way to use the CreateFile Windows API function to append data to the specified file rather than overwriting it.
I am not looking for an alternative way to do it, such as Append() or Rewrite() or similar. I am looking specifically to do this by use of the CreateFile Windows API function.
I tried using:
// this will open the existing file but will **overwrite** data in the file
fHandle := CreateFile(PChar(FName), GENERIC_READ or GENERIC_WRITE, 0,
  nil, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);

// this will recreate the file each time, therefore deleting its original content
fHandle := CreateFile(PChar(FName), GENERIC_READ or GENERIC_WRITE, 0,
  nil, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
Much appreciated,
I suspect that OPEN_ALWAYS is actually what you need here.
Opens a file, always.
If the specified file exists, the function succeeds and the last-error code is set to ERROR_ALREADY_EXISTS (183).
If the specified file does not exist and is a valid path to a writable location, the function creates a file and the last-error code is set to zero.
And if you are writing then you can remove GENERIC_READ.
Another problem that I anticipate is that when the file is opened, the file position is set to the beginning of the file. Seek to the end to deal with that.
Win32Check(SetFilePointerEx(fHandle, 0, nil, FILE_END));
Alternatively you can use FILE_APPEND_DATA instead of GENERIC_WRITE.
Handle := CreateFile(PChar(Name), FILE_APPEND_DATA, 0,
  nil, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
When you use FILE_APPEND_DATA, providing that you do not also use FILE_WRITE_DATA, all writes are made to the end of the file, irrespective of the current value of the file pointer.
The documentation says it like this:
For a file object, the right to append data to the file. (For local files, write operations will not overwrite existing data if this flag is specified without FILE_WRITE_DATA.)
Note that older versions of Delphi do not define FILE_APPEND_DATA and so you need to:
const
  FILE_APPEND_DATA = $0004;
All this said, I suspect that, a stream or a writer class is a better option here. Are you sure you want to get down and dirty with the Win32 API?
Specify that you want File_Append_Data access in the second parameter without also requesting File_Write_Data access. Then all writes will be at the end of the file.
To open a file, creating it if it doesn't already exist, pass Open_Always for the dwCreationDisposition parameter. (There are only five possible values documented for that parameter, so it doesn't take long to look down the list and select the one that most closely matches your needs.)