How to extract screen size from title using Google Dialogflow - machine-learning

I have different laptops with titles as below.
Acer One 10 S1002-15XR NT.G53SI.001 10.1 Inch Laptop (Quad Core/2GB/32GB eMMC/Win 10/Touch) Dark Silver
Acer One S1003 Nt Lcqsi 001 Hybrid (2 In 1) Intel Atom 2 Gb 25.65cm(10.1) Windows 10 Home - Black
Acer One S1003 Nt Lcqsi 001 Hybrid (2 In 1) Intel Atom 2 Gb 25.65cm(10.1") Windows 10 Home - Black
Acer One S1003 Nt Lcqsi 001 Hybrid (2 In 1) Intel Atom 2 Gb 25.65cm(10.1Inch) Windows 10 Home - Black
HP Spectre 13 i7 8GB 512GB SSD 10.1 Full HD (1920x1080) Touch Back-lit KeyBoard Intel HD 620 No CD/DVD Drive Dark Ash
So all the above laptops have a 10.1 inch screen size, but it is written differently in each title. How can I generalize all of these to a common value such as 10_inch using Google's Dialogflow?
I have made a screen_size entity like the one below.
But I don't want to have to specify every possible screen size in the entity.
Can we do this using a system entity or a composite entity?

Parsing this sort of product information is outside of the normal use case for Dialogflow, which is generally intended for use in building conversational experiences that involve natural language (such as chatbots).
If you're looking for an API, I had some success in extracting the information you are looking for using the Cloud Natural Language API.
There's a tool you can use to test it out; enter the string, hit "Analyze" and then click "Syntax". For all the examples you gave above, the screen size was extracted as a num part of speech.
Even so, these APIs were designed for use with natural language. The machine learning models they are based on were not trained on this sort of input text, so they may not reliably extract meaning from it.
As an alternative, you could try training your own extractor using a machine learning toolkit such as Tensorflow, or just write some crazy regex or string parsing algorithm.
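If you go the regex route, a minimal sketch in Python might look like the following. Note that the patterns are tuned only to the sample titles above, and the helper names are my own, not from any library:

```python
import re

def extract_screen_size(title):
    """Heuristically pull the screen size (in inches) out of a product title."""
    # Pattern 1: metric size with the inch value in parentheses,
    # e.g. 25.65cm(10.1), 25.65cm(10.1"), 25.65cm(10.1Inch)
    m = re.search(r'cm\s*\(\s*(\d{1,2}(?:\.\d{1,2})?)', title, re.IGNORECASE)
    if m:
        return m.group(1)
    # Pattern 2: inch value with an explicit unit, e.g. 10.1 Inch, 10.1"
    m = re.search(r'(\d{1,2}(?:\.\d{1,2})?)\s*(?:"|inch(?:es)?\b)', title,
                  re.IGNORECASE)
    if m:
        return m.group(1)
    # Pattern 3: bare decimal number; screen sizes in these titles always
    # carry a fractional part, which filters out values like "2GB" or "13"
    m = re.search(r'\b(\d{1,2}\.\d{1,2})\b', title)
    return m.group(1) if m else None

def bucket(size):
    """Normalize e.g. '10.1' to the common label '10_inch'."""
    return "%d_inch" % int(float(size))
```

Anything these patterns miss would need a new case; a real product catalog will have many more variants than the five titles shown here.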

Related

Why isn't UWB technology used for big file transfers?

I am working on my thesis right now and I have to compare near field communication technologies (WPAN) that can transfer files.
Everybody is talking about how great UWB is for locating things and how fast it is, but no one (except Apple) has used it for file transmission. Why? It has a bigger bandwidth than Wi-Fi peer-to-peer.
Apple seems to use it for AirDrop, and there is an API to develop against this technology for both Android and iOS. But it looks like it is designed for location services and only works with specific devices for location. So I would not be able to use it, for example, to transfer files between iOS/Android and a Raspberry Pi in the near field.
Can anyone explain whether UWB can transfer files, and why I should use Wi-Fi Direct instead of UWB if I want to transfer files larger than 1 GB at the fastest possible speed (but without internet, of course)?
Thank you very much.
IR-UWB is more popular than MC-UWB.
UWB modulation schemes can broadly be divided into two categories:
multi-carrier UWB (MC-UWB): used for high-throughput data transmission, up to 480 Mbps
impulse-radio UWB (IR-UWB): used for localization and sensing
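To put the asker's numbers in perspective, a back-of-the-envelope calculation shows what MC-UWB's 480 Mbps peak rate would mean for a file of the size mentioned in the question, on an ideal link:

```python
# Idealized transfer time for a 1 GiB file at MC-UWB's 480 Mbps peak,
# ignoring all protocol overhead (real throughput would be lower).
file_bytes = 1 * 1024**3            # 1 GiB
rate_bits_per_second = 480 * 10**6  # 480 Mbps (MC-UWB peak)
seconds = file_bytes * 8 / rate_bits_per_second
print(f"{seconds:.1f} s")           # ~17.9 s under ideal conditions
```

So even the high-throughput UWB variant would need on the order of tens of seconds per gigabyte at its theoretical best, before any real-world losses.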

Can a circle be printed/drawn in a printfile with DDS?

Using DDS I know I can print a box and color it in using the BOX keyword:
A R BOX5 BOX(2.5 0.5 5.1 6.3 0.2 +
A (*COLOR *HIGHLIGHT 3 75)
Is there something similar to create a circle?
According to the DDS Reference: No. I can't find any keywords to directly draw a circle.
My guess is that back in the heyday of high-volume impact printers, there was no fast way to print such circles. Note: impact printers aren't necessarily the same as dot-matrix printers. Lines were possible with special characters, though. The "language" used to drive such a printer was called SCS (SNA Character String).
But you can create a circle as desired with external programs, convert the result to a page segment using specialized IBM software, and load that via DDS onto a page. See the PAGSEG keyword in the linked documentation for information and caveats. Especially the need to use AFP might pose a serious obstacle. (AFP is, overly simplified and thus not entirely correct, a page description language like PCL or PostScript; IPDS can be roughly seen as equivalent to PJL.) Ricoh printers sometimes have native IPDS/AFP support. Also, some manufacturers offered converter boxes that present a fake SCS or even IPDS/AFP printer to the host side while appearing as a PJL/PCL printer data generator to the printer.
The built-in Host Print Transform feature, which can be enabled for printer devices, converts the spooled output to PCL so it can be sent to stock printers. The drawback is that it uses local CPU resources, which might not be desired. Older releases of the OS might only support SCS with Host Print Transform.
Newer IBM i releases include InfoPrint Server, a Java-based background task that can convert print jobs on the machine to PDF. I assume this should work with AFP. Not talking about resource usage, though…
Printing on IBM i is a deep rabbit hole in itself. See the accompanying documentation.

Formatting SD card as TexFAT for WinCe

I am using MicroSD cards as the storage on an embedded system running WinCE. Recently I have found that cards made by the same manufacturer in different parts of the world have differences and cause us issues.
I read at the SD Association about the formatting issues that Windows formatters produce, so I downloaded their SD Memory Card Formatter. That is good, but we run our SD cards in WinCE as TexFAT. So what I now do is format the card with a FAT32 partition so the PC can put the software onto the card; then the WinCE system formats the other partition as TexFAT and copies the software onto it on the first boot with the new card.
The question is: what is the correct way to format an SD card as TexFAT for WinCE from a PC? Any suggestions?
After lots of head scratching I found the answer is simple. The WinCE partition has to be formatted as TexFAT on the WinCE system, which was not a problem; the bit I did not know is that I needed to change the cluster size. Once I changed the 8 GB card to 4 KB clusters, the time to write to the card decreased a lot. I have since tried other sizes and found that you need to experiment with the cluster size to get the optimum out of the card.
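The cluster-size trade-off described in that answer can be made concrete with some illustrative arithmetic. These numbers only show allocation granularity; the real optimum depends on the individual card, as the answer notes:

```python
# An 8 GB card as in the answer: smaller clusters mean more allocation
# units for the filesystem to track; larger clusters mean fewer, coarser
# writes. Illustrative only -- measure on the actual card.
card_bytes = 8 * 1024**3
for cluster_kib in (4, 32, 128):
    clusters = card_bytes // (cluster_kib * 1024)
    print(f"{cluster_kib:>3} KiB clusters -> {clusters:,} allocation units")
```

A 32x change in cluster size changes the allocation-unit count by the same factor, which is why write timing can shift so dramatically between settings.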

Desktop Duplication API & switchable graphics

The problem: calling IDXGIOutput1::DuplicateOutput method returns DXGI_ERROR_UNSUPPORTED when you run an application using discrete graphics controller on a machine with switchable graphics.
This answer sheds some light on the issue. In short, the discrete graphics controller renders only a part of the screen and sends the data to the framebuffer of the integrated graphics controller -- in other words, all output always goes through the integrated graphics controller. It seems that this is why DuplicateOutput returns DXGI_ERROR_UNSUPPORTED.
I wrote a sample that gets all outputs and their videoadapters using winapi (EnumDisplayDevices function) & directx (IDXGIFactory::EnumAdapters method & IDXGIAdapter::EnumOutputs method) to compare on a machine with switchable graphics (Intel HD 4600 & NVIDIA 840M). This is the result:
I am not sure how correct my way of comparison is, but you can see that WinAPI says DISPLAY1 belongs to the Intel card while DirectX says DISPLAY1 belongs to the NVIDIA card. One solution would be to duplicate the output of the Intel card (because everything goes through it), but EnumOutputs returns no outputs for it.
Currently there is a workaround: always run an application that uses Duplication API using the integrated graphics controller.
The question: how can I make DuplicateOutput work with the discrete graphics controller on a laptop with switchable graphics? Or is this a limitation of the Desktop Duplication API?
Solved:
Unfortunately this issue occurs because the Desktop Duplication API does not support being run against the discrete GPU on a Microsoft Hybrid system. By design, the call fails together with error code DXGI_ERROR_UNSUPPORTED in such a scenario. To work around this issue, run the application on the integrated GPU instead of on the discrete GPU on a Microsoft Hybrid system.
From here: https://support.microsoft.com/en-us/kb/3019314

OpenCV with 2 cameras VC++

I am importing source code for stereo vision. The following code from the author works; it takes two camera sources. I currently have two different cameras and I receive images from both. It crashes at capture2. The interesting part is that if I change the order of the webcams (unplugging them and inverting the order), the first camera becomes the second one. Why doesn't it work? I also tested with Windows XP SP3 and Windows 7 x64; same problem.
//---------Starting WebCam----------
capture1 = cvCaptureFromCAM(1);
assert(capture1 != NULL);
cvWaitKey(100);
capture2 = cvCaptureFromCAM(2);
assert(capture2 != NULL);
Also, if I use -1 for the parameter, it just gives me the first camera (all the time).
Is there any other method to capture two cameras using the cvCaptureFromCAM function?
Firstly, the cameras are generally numbered from 0 - is this simply the problem?
Secondly, DirectShow and multiple USB webcams are notoriously bad on Windows. Sometimes it will work with two identical cameras, sometimes only if they are different.
You can also try a delay between initialising the cameras; sometimes one will lock the capture stream until it is sending data, preventing the other from being detected.
Often the drivers assume they are the only camera and make incorrect calls that lock up the entire capture graph. This isn't helped by it being extremely complicated to write correct drivers and DirectShow filters on Windows.
Some motherboards cannot work with certain USB 2.0 cameras: one USB 2.0 camera can take 40-60% of a USB controller's bandwidth. The solution is to connect the second USB 2.0 camera through a PCI-to-USB controller.
Get 2 PS3 Eyes, around EUR 10 each, and the free codelaboratories.com SDK; this gets you support for up to 2 cameras using C, C#, Java, and AS3, incl. examples etc. You also get fixed frame rates of up to 75 fps @ 640*480. Their free driver-only version 5.1.1.0177 provides a decent DirectShow component, but for a single camera only.
Comment for the rest: multi-cam DirectShow drivers should be a default for any manufacturer; not providing them is a direct failure to implement the very basic purpose and feature of USB as an interface. It is also very easy to implement, compared to implementing the driver itself for a particular sensor/chipset.
Alternatives that are confirmed to work in identical pairs (via DirectShow):
Microsoft Lifecam HD Cinema (use general UVC driver if you can, less limited fps)
Logitech Webcam Pro 9000 (not to be confused with QuickCam Pro 9000, which DOES NOT work)
Creative VF0220
Creative VF0330
Canyon WCAMN-1N
If you're serious about your work, get a pair of machine vision cameras for performance. Cheapest on the market, with German engineering quality: CCD, CMOS, mono, colour, GigE (Ethernet), USB, FireWire, and an excellent range of dedicated drivers:
http://www.theimagingsource.com