Enable resolutions not exposed by the display programmatically on NVIDIA GPUs

I'm working on a solution where there is a need to set a custom resolution for particular connected displays on a set of systems. What I have now works fine, but only as long as the "Enable resolutions not exposed by the display" option has been checked manually through the NVIDIA Control Panel (found under Display > Change resolution > Customize... > Enable resolutions not exposed by the display).
Is there a way to enable this option programmatically, preferably through NVIDIA's core SDK, NVAPI?

Unsafe display modes (resolutions not exposed by the display) can be enabled through the ChangeDisplaySettingsEx function, exposed by the Windows API, by passing CDS_ENABLE_UNSAFE_MODES as the fourth parameter, dwflags. (To disable them, use CDS_DISABLE_UNSAFE_MODES.)
Code extract exemplifying usage:
DWORD deviceIndex = 0;
DISPLAY_DEVICE displayDevice = { 0 };
displayDevice.cb = sizeof(DISPLAY_DEVICE);

while (EnumDisplayDevices(NULL, deviceIndex, &displayDevice, 0)) {
    deviceIndex++;

    DEVMODE deviceMode = { 0 };
    deviceMode.dmSize = sizeof(DEVMODE);
    if (!EnumDisplaySettings(displayDevice.DeviceName, ENUM_CURRENT_SETTINGS, &deviceMode))
        continue;

    auto result = ChangeDisplaySettingsEx(displayDevice.DeviceName, &deviceMode, NULL, CDS_ENABLE_UNSAFE_MODES, NULL);
    if (result != DISP_CHANGE_SUCCESSFUL) {
        // Handle failure here...
    }
}
Note that this will enable unsafe graphics modes for all display devices.
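For completeness, here is a minimal sketch of the reverse operation for a single device (the device name "\\.\DISPLAY1" is only a placeholder; enumerate devices as above to find the one you actually need):

DEVMODE deviceMode = { 0 };
deviceMode.dmSize = sizeof(DEVMODE);
// Hypothetical target device; substitute the DeviceName found via EnumDisplayDevices.
if (EnumDisplaySettings(TEXT("\\\\.\\DISPLAY1"), ENUM_CURRENT_SETTINGS, &deviceMode)) {
    // CDS_DISABLE_UNSAFE_MODES turns the option back off for this device.
    ChangeDisplaySettingsEx(TEXT("\\\\.\\DISPLAY1"), &deviceMode, NULL, CDS_DISABLE_UNSAFE_MODES, NULL);
}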

Related

Xamarin.Forms Xam.Plugin.Media Take Picture on iOS

I'm using the Xam.Plugin.Media in my Forms app to take pictures.
I take the image stream (GetStream), convert it to a byte[] and store it in my DB.
I then use that as the image source.
On Android and UWP, it's working fine.
On iOS, if the picture is taken in portrait mode, the image, once selected, is always rotated 90 degrees.
I will later upload this to a server, and that image could be used on a different device.
For this, I also tried GetStreamWithImageRotatedForExternalStorage, but in this case I can't see the image at all.
There are bytes in the stream (I DisplayAlert the length), but the image does not display.
Any idea what I might be doing wrong?
My code:
private async Task TakePicture(WineDetails details)
{
    await CrossMedia.Current.Initialize();

    if (CrossMedia.Current.IsCameraAvailable && CrossMedia.Current.IsTakePhotoSupported)
    {
        var file = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
        {
            AllowCropping = true,
            PhotoSize = Plugin.Media.Abstractions.PhotoSize.Medium,
            SaveToAlbum = false,
            RotateImage = true
        });

        if (file == null)
            return;

        using (var ms = new MemoryStream())
        {
            var stream = file.GetStreamWithImageRotatedForExternalStorage();
            stream.CopyTo(ms);
            details.LabelImage = ms.ToArray();
            details.NotifyChange("ImageSource");
        }
    }
}
The image is updated in the page via NotifyChange and looks like this:
ImageSource.FromStream(() => new MemoryStream(this.LabelImage))
This works fine in all cases on Android and UWP, and works on iOS using GetStream (except that the image is incorrectly rotated), but it does not work using GetStreamWithImageRotatedForExternalStorage on iOS.
Anyone else using this plugin?
Any idea why GetStream returns a rotated image?
Any idea why GetStreamWithImageRotatedForExternalStorage is not working?
Thanks
Update:
Changed SaveToAlbum = true, and when I open the gallery the image is rotated 90 degrees.
I have RotateImage = true, which could cause the issue; I'll try setting it to false.
I still can't set the image source to the byte array of the image using GetStreamWithImageRotatedForExternalStorage.
using (var ms = new MemoryStream())
{
    file.GetStreamWithImageRotatedForExternalStorage().CopyTo(ms);
    details.LabelImage = ms.ToArray();
}
Using the byte array for an image:
return ImageSource.FromStream(() => new MemoryStream(this.LabelImage));
This does not work for me; GetStream works OK.
Update:
OK, so RotateImage = false + GetStreamWithImageRotatedForExternalStorage allows me to display the image, but it's still incorrectly rotated in my app and in the gallery.
I'm using this plugin, which is similar (if not the same thing - I know James Montemagno has recently packaged/bundled his work with Xamarin).
If you check the issues board there, you'll see there are quite a few people that have similar troubles (image rotation on iOS). Almost every 'solution' mentions using GetStreamWithImageRotatedForExternalStorage.
My issue was similar: I was unable to take a photo on iOS in portrait mode without other (non-iOS) devices rotating the image. I tried for weeks to solve this issue, but support on the plugin seems to be quite limited.
Ultimately I had to solve this with a huge workaround: using a custom renderer extending FFImageLoading to display our images, together with MetadataExtractor. We were then able to extract the EXIF data from the stream and apply a rotation transformation to the FFImageLoading image control.
The rotation information was stored in a sort of weird way, as a string. This is the method I'm using to extract the rotation information and return the amount the image needs to be rotated by as an int. Note that for me, iOS was able to display the image correctly anyway, so it only returns a rotation correction for Android devices.
public static int GetImageRotationCorrection(byte[] image)
{
    try
    {
        var directories = ImageMetadataReader.ReadMetadata(new MemoryStream(image));

        if (Device.RuntimePlatform == Device.Android)
        {
            foreach (var directory in directories)
            {
                foreach (var tag in directory.Tags)
                {
                    if (tag.Name == "Orientation")
                    {
                        if (tag.Description == "Top, left side (Horizontal / normal)")
                            return 0;
                        else if (tag.Description == "Left side, bottom (Rotate 270 CW)")
                            return 270;
                        else if (tag.Description == "Right side, top (Rotate 90 CW)")
                            return 90;
                    }
                }
            }
        }

        return 0;
    }
    catch (Exception)
    {
        return 0;
    }
}
Note that there is also a custom image control for FFImageLoading.
public class RotatedImage : CachedImage
{
    public static BindableProperty MyRotationProperty = BindableProperty.Create(nameof(MyRotation), typeof(int), typeof(RotatedImage), 0, propertyChanged: UpdateRotation);

    public int MyRotation
    {
        get { return (int)GetValue(MyRotationProperty); }
        set { SetValue(MyRotationProperty, value); }
    }

    private static void UpdateRotation(BindableObject bindable, object oldRotation, object newRotation)
    {
        var _oldRotation = (int)oldRotation;
        var _newRotation = (int)newRotation;

        if (!_oldRotation.Equals(_newRotation))
        {
            var view = (RotatedImage)bindable;
            var transformations = new System.Collections.Generic.List<ITransformation>() {
                new RotateTransformation(_newRotation)
            };
            view.Transformations = transformations;
        }
    }
}
So, in my XAML I declared a RotatedImage instead of the standard Image. With the custom renderer, I'm able to do the following and have the image display rotated the correct amount:
image.MyRotation = GetImageRotationCorrection(imageAsBytes);
It's a totally unnecessary workaround, but these are the lengths I had to go to in order to get around this issue.
I'll definitely be following along with this question, there might be someone in the community who could help us both!
The SaveMetaData flag is causing the rotation issue.
Setting it to false (default is true) now displays the photo correctly.
One side effect of that: the image no longer appears in the gallery if SaveToAlbum = true.
Still can't use an image taken when using GetStreamWithImageRotatedForExternalStorage, even using FFImageLoading.
I found that while using Xam.Plugin.Media v5.0.1 (https://github.com/jamesmontemagno/MediaPlugin), the combination of three different inputs produced different results on Android vs. iOS:
StoreCameraMediaOptions.SaveMetaData
StoreCameraMediaOptions.RotateImage
Using MediaFile.GetStream() vs. MediaFile.GetStreamWithImageRotatedForExternalStorage()
On Android, SaveMetaData = false, RotateImage = true, and using MediaFile.GetStreamWithImageRotatedForExternalStorage() worked for me whether I was saving the result stream externally or processing the result stream locally for display.
On iOS, the combination of RotateImage = true and StreamRotated = true would result in a NullReferenceException coming out of the plugin library. Using MediaFile.GetStreamWithImageRotatedForExternalStorage() appeared to have no impact on behavior.
--
Before going further, it's important to understand that image orientation in the JPEG format (which Xam.Plugin.Media seems to use) isn't as straightforward as you might think. Rather than rotating the raw image bytes 90 or 180 or 270 degrees, JPEG orientation can be set through embedded EXIF metadata. Orientation issues will happen with JPEGs either if EXIF data is stripped or if downstream consumers don't handle the EXIF data properly.
The approach I landed on was to normalize JPEG image orientation at the point the image is captured, without relying on EXIF metadata. That way, downstream consumers don't have to be trusted to inspect and handle EXIF orientation metadata properly.
The basic solution is:
Scan a JPEG for EXIF orientation metadata
Transform the JPEG to rotate/flip as needed
Set the result JPEG's orientation metadata to default
--
Here is a code example compatible with Xamarin, using the ExifLib.Standard (1.7.0) and SixLabors.ImageSharp (1.0.4) NuGet packages, based on (Problem reading JPEG Metadata (Orientation)):
using System;
using System.IO;
using ExifLib;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Jpeg;
using SixLabors.ImageSharp.Metadata.Profiles.Exif;
using SixLabors.ImageSharp.Processing;

namespace Your.Namespace
{
    public static class ImageOrientationUtility
    {
        public static Stream NormalizeOrientation(Func<Stream> inputStreamFunc)
        {
            using Stream exifStream = inputStreamFunc();
            using var exifReader = new ExifReader(exifStream);
            bool orientationTagExists = exifReader.GetTagValue(ExifTags.Orientation, out ushort orientationTagValue);
            if (!orientationTagExists)
                // You may wish to do something less aggressive than throw an exception in this case.
                throw new InvalidOperationException("Input stream does not contain an orientation EXIF tag.");

            using Stream processStream = inputStreamFunc();
            using Image image = Image.Load(processStream);

            switch (orientationTagValue)
            {
                case 1:
                    // No rotation required.
                    break;
                case 2:
                    image.Mutate(x => x.RotateFlip(RotateMode.None, FlipMode.Horizontal));
                    break;
                case 3:
                    image.Mutate(x => x.RotateFlip(RotateMode.Rotate180, FlipMode.None));
                    break;
                case 4:
                    image.Mutate(x => x.RotateFlip(RotateMode.Rotate180, FlipMode.Horizontal));
                    break;
                case 5:
                    image.Mutate(x => x.RotateFlip(RotateMode.Rotate90, FlipMode.Horizontal));
                    break;
                case 6:
                    image.Mutate(x => x.RotateFlip(RotateMode.Rotate90, FlipMode.None));
                    break;
                case 7:
                    image.Mutate(x => x.RotateFlip(RotateMode.Rotate270, FlipMode.Horizontal));
                    break;
                case 8:
                    image.Mutate(x => x.RotateFlip(RotateMode.Rotate270, FlipMode.None));
                    break;
            }

            image.Metadata.ExifProfile.SetValue(ExifTag.Orientation, (ushort)1);

            var outStream = new MemoryStream();
            image.Save(outStream, new JpegEncoder { Quality = 100 });
            outStream.Position = 0;

            return outStream;
        }
    }
}
And to use in conjunction with Xam.Plugin.Media:
MediaFile photo = await CrossMedia.Current.TakePhotoAsync(options);
await using Stream stream = ImageOrientationUtility.NormalizeOrientation(photo.GetStream);

Scaling Image Causes Crash In AS3 Flex AIR Mobile App

Problem:
Zooming in on an image by scaling and moving it using a matrix causes the app to run out of memory and crash.
Additional Libraries used:
Gestouch - https://github.com/fljot/Gestouch
Description:
In my Flex Mobile app I have an Image inside a Group, with pan/zoom enabled using the Gestouch library. The zoom works to an extent, but causes the app to die (not freeze, just exit) with no error message after a certain zoom level.
This would be manageable except I can't figure out a threshold to stop the zoom at, as it crashes at a different zoom level almost every time. I also use dynamic images, so the source of the image could be any size or resolution.
They are usually JPEGs ranging from about 800x600 to 9000x6000, and they are downloaded from a server, so they cannot be packaged with the app.
According to the AS3 docs, there is no longer a limit to the size of a BitmapData object, so that shouldn't be the issue.
“Starting with AIR 3 and Flash player 11, the size limits for a BitmapData object have been removed. The maximum size of a bitmap is now dependent on the operating system.”
The group is used as a marker layer for overlaying pins on.
The crash mainly happens on iPad Mini and older Android devices.
Things I have already tried:
1. Using Adobe Scout to pinpoint when the memory leak occurs.
2. Debugging to find the exact height and width of the marker layer and image at the time of the crash.
3. Setting a max zoom variable based on the size of the image.
4. Cropping the image on zoom to only show the visible area (crashes in the copyPixels and BitmapData.draw() functions).
5. Using ImageMagick to make lower-quality images (small images still crash).
6. Using ImageMagick to make a very low-res image plus a grid of smaller images, displayed in the mobile app using a List and Tile layout.
7. Using weak references when adding event listeners.
Any suggestions would be appreciated.
Thanks
private function layoutImageResized(e:Event):void
{
    markerLayer.scaleX = markerLayer.scaleY = 1;
    markerLayer.x = markerLayer.y = 0;

    var scale:Number = Math.min(width / image.sourceWidth, height / image.sourceHeight);
    image.scaleX = image.scaleY = scale;
    _imageIsWide = (image.sourceWidth / image.sourceHeight) > (width / height);

    // centre image
    if (_imageIsWide)
    {
        markerLayer.y = (height - image.sourceHeight * image.scaleY) / 2;
    }
    else
    {
        markerLayer.x = (width - image.sourceWidth * image.scaleX) / 2;
    }

    // set max scale
    _maxScale = scale * _maxZoom;
}

private function onGesture(event:org.gestouch.events.GestureEvent):void
{
    trace("Gesture start");

    // If the user starts moving around while the add-pin option is up,
    // the state will be changed and the menu will disappear.
    if (currentState == "addPin")
    {
        return;
    }

    const gesture:TransformGesture = event.target as TransformGesture;
    ////trace("gesture state is ", gesture.state);

    if (gesture.state == GestureState.BEGAN)
    {
        currentState = "zooming";
        imgOldX = image.x;
        imgOldY = image.y;
        oldImgWidth = markerLayer.width;
        oldImgHeight = markerLayer.height;
        if (!_hidePins)
        {
            showHidePins(false);
        }
    }

    var matrix:Matrix = markerLayer.transform.matrix;

    // Pan
    matrix.translate(gesture.offsetX, gesture.offsetY);
    markerLayer.transform.matrix = matrix;

    if ((gesture.scale != 1 || gesture.rotation != 0) && ((markerLayer.scaleX < _maxScale && markerLayer.scaleY < _maxScale) || gesture.scale < 1) && gesture.scale < 1.4)
    {
        storedScale = gesture.scale;

        // Zoom
        var transformPoint:Point = matrix.transformPoint(markerLayer.globalToLocal(gesture.location));
        matrix.translate(-transformPoint.x, -transformPoint.y);
        matrix.scale(gesture.scale, gesture.scale);
        /** THIS IS WHERE THE CRASH HAPPENS **/
        matrix.translate(transformPoint.x, transformPoint.y);
        markerLayer.transform.matrix = matrix;
    }
}
I would say it's not a good idea to work with an image as large as 9000x6000 on mobile devices.
I suppose you are trying to implement some sort of map navigation, where you need to zoom some areas hugely.
My solution would be to split that 9000x6000 image into 2048x2048 pieces, then compress them using the png2atf utility with mipmaps enabled.
Then you can use Starling to easily load these ATF images, add them to the Stage3D stage, and manage them.
With a 9000x6000 image you'll get about 15 2048x2048 pieces. Having them all added to the stage at one time might sound heavy, but the mipmaps mean that only tiny thumbnails of the images are kept in memory until they are zoomed, so you'll never run out of memory, as long as you remove invisible pieces from the stage from time to time while zooming in and return them on zoom out.
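To make the tile layout concrete, here is an illustrative sketch (in C++ rather than AS3, purely to show the arithmetic; nothing here is Starling API) of how a 9000x6000 source divides into 2048x2048 pieces:

#include <cstdio>

int main() {
    const int imageWidth = 9000, imageHeight = 6000; // example source size
    const int tileSize = 2048;

    // Round up so the partial tiles at the right/bottom edges are counted.
    const int cols = (imageWidth + tileSize - 1) / tileSize;  // 5
    const int rows = (imageHeight + tileSize - 1) / tileSize; // 3
    printf("%d x %d = %d tiles\n", cols, rows, cols * rows);  // 15 tiles

    // Source rectangle of each tile, clamped at the image bounds.
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            const int x = c * tileSize, y = r * tileSize;
            const int w = (x + tileSize <= imageWidth) ? tileSize : imageWidth - x;
            const int h = (y + tileSize <= imageHeight) ? tileSize : imageHeight - y;
            printf("tile (%d,%d): x=%d y=%d %dx%d\n", c, r, x, y, w, h);
        }
    return 0;
}

This is where the "about 15 pieces" figure comes from: ceil(9000/2048) = 5 columns by ceil(6000/2048) = 3 rows.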

Qt iOS: change format of received frames from camera

Frames received from the QVideoFilterRunnable::run method have the format QVideoFrame::Format_NV12, so before loading them into a GPU texture using glTexImage I need to convert them to BGRA first. Is there any way to change the output format of the camera?
Here is the original problem:
Qt iOS: how to return QVideoFrame with type GLTextureHandle from QVideoFilterRunnable
Since I'm using QML, these are instructions for getting hold of the QML object and using it in C++ code. I guess the pure C++ setup will be much the same.
First you need to mark the camera object in the QML source with an objectName:
Camera {
    id: camera
    objectName: "CameraObject"
}
Get the root object of the QQuickView:
QQuickView view;
QQuickItem* root = view.rootObject();
assert(root != nullptr);
Get QML camera:
QObject* qmlCamera = root->findChild<QObject*>("CameraObject");
assert(qmlCamera != nullptr);
Get C++ camera from QML camera object:
QCamera* camera = qvariant_cast<QCamera*>(qmlCamera->property("mediaObject"));
assert(camera != nullptr);
Find the format you need:
QCameraViewfinderSettings bestSetting;
assert(bestSetting.isNull()); // sanity check

auto viewfinderSettings = camera->supportedViewfinderSettings();
for (auto i : viewfinderSettings) {
    if (i.pixelFormat() != QVideoFrame::Format_ARGB32) {
        // skip non-ARGB formats
        continue;
    }
    // Check i.resolution(): several settings with Format_ARGB32 will be
    // available; pick the one whose resolution fits best for you.
    bestSetting = i;
}
Check that something was found, and apply the settings:
assert(bestSetting.pixelFormat() == QVideoFrame::Format_ARGB32);
camera->setViewfinderSettings(bestSetting);
Now the view can be shown:
view.show();
return app.exec();

Nokia 5110 LCD initialization issue

I am trying to connect a Nokia 5110 LCD to a BeagleBone Black Rev C over the SPI protocol.
The connections are exactly as shown on page 6 of:
Nokia5110-BeagleBone Black Connections
I wrote a C equivalent of Arduino's code for the Philips PCD8544 (Nokia 3310) driver, in which I export the required GPIO ports and send commands and data over the SPI interface.
I successfully installed and ran Adafruit's python-library:
Adafruit Nokia LCD
My problem is a strange one: when I run this Python code first and then my C code, the code works perfectly! But if I run my C code before the Python code, I get no output. Logic says the Python code must be initializing something that I am missing in my code.
Here's how I initialize the LCD:
fd_spi_dev = open(device, O_RDWR);

// set mode
mode = SPI_MODE_0;
ioctl(fd_spi_dev, SPI_IOC_WR_MODE, &mode);
ioctl(fd_spi_dev, SPI_IOC_RD_MODE, &mode);

// set max bitrate
speed = 4000000;
ioctl(fd_spi_dev, SPI_IOC_RD_MAX_SPEED_HZ, &speed);
ioctl(fd_spi_dev, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

// set MSB first
lsbsetting = 0;
ioctl(fd_spi_dev, SPI_IOC_WR_LSB_FIRST, &lsbsetting);

// set bits per word
bits = 8;
ioctl(fd_spi_dev, SPI_IOC_WR_BITS_PER_WORD, &bits);
ioctl(fd_spi_dev, SPI_IOC_RD_BITS_PER_WORD, &bits);

lcd_write_cmd(0x21); // LCD extended commands
lcd_write_cmd(0xB8); // set LCD Vop (contrast)
lcd_write_cmd(0x04); // set temp coefficient
lcd_write_cmd(0x14); // set bias mode 1:40
lcd_write_cmd(0x20); // LCD basic commands
lcd_write_cmd(0x09); // LCD all segments on

/* I am expecting to see all segments lit here */
sleep(5);
lcd_write_cmd(0x0C); // LCD normal video

void lcd_write_cmd(uint8_t cmd) {
    uint8_t *tx = &cmd;
    uint8_t rx;
    uint32_t len = 1;
    struct spi_ioc_transfer tr = {
        .tx_buf = (unsigned long)tx,  // spidev expects a 64-bit field; cast via unsigned long, not uint32_t
        .rx_buf = (unsigned long)&rx,
        .len = len,
        .delay_usecs = delay,
        .speed_hz = speed,
        .bits_per_word = bits,
        .cs_change = 1,
    };
    size = write(fd_dc_val, "0", 1); // D/C low: command
    size = write(fd_cs_val, "0", 1); // assert chip select
    ioctl(fd_spi_dev, SPI_IOC_MESSAGE(1), &tr);
    write(fd_cs_val, "1", 1);        // release chip select
}
I am a novice in embedded programming. I would greatly appreciate any help. Thank you.
If you're not missing an initialization step (and I haven't checked you against the 5110 datasheet), it must be either something wrong with your ioctls or a timing issue.
You could try using a library that abstracts away the ioctl calls to rule that out (I'm partial to my own: https://github.com/graycatlabs/serbus ;).
If it still doesn't work with that, then I'd say it's probably a timing issue: Python is a lot slower than C when it comes to file I/O, so it might not be giving the LCD driver enough time to update after some of the commands. Check the datasheet to see whether it needs some time after any of the commands.
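If you want to test the timing theory quickly, a minimal sketch would be to pad each command with a delay (the 1 ms figure is a guess, not a PCD8544 datasheet value):

#include <stdint.h>
#include <unistd.h> // usleep

/* Hypothetical wrapper around the question's lcd_write_cmd: identical,
 * but pauses after each command so a slow-to-settle controller has time
 * to latch it. Tune or remove the delay once the datasheet is checked. */
static void lcd_write_cmd_delayed(uint8_t cmd) {
    lcd_write_cmd(cmd);
    usleep(1000); /* 1 ms guess */
}

If the display starts working with the delays in place, bisect them down to find which command actually needs the settling time.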

How to set camera FPS in OpenCV? CV_CAP_PROP_FPS is a fake

How to set Camera FPS?
Maybe
cvSetCaptureProperty(cameraCapture, CV_CAP_PROP_FPS, 30);
?
But it returns
HIGHGUI ERROR: V4L2: Unable to get property (5) - Invalid argument
because there is no implementation in highgui/cap_v4l.cpp:
static int icvSetPropertyCAM_V4L( CvCaptureCAM_V4L* capture,
                                  int property_id, double value ){
    static int width = 0, height = 0;
    int retval;

    /* initialization */
    retval = 0;

    /* two subsequent calls setting WIDTH and HEIGHT will change
       the video size */
    /* the first one will return an error, though. */
    switch (property_id) {
    case CV_CAP_PROP_FRAME_WIDTH:
        width = cvRound(value);
        if(width != 0 && height != 0) {
            retval = icvSetVideoSize( capture, width, height);
            width = height = 0;
        }
        break;
    case CV_CAP_PROP_FRAME_HEIGHT:
        height = cvRound(value);
        if(width != 0 && height != 0) {
            retval = icvSetVideoSize( capture, width, height);
            width = height = 0;
        }
        break;
    case CV_CAP_PROP_BRIGHTNESS:
    case CV_CAP_PROP_CONTRAST:
    case CV_CAP_PROP_SATURATION:
    case CV_CAP_PROP_HUE:
    case CV_CAP_PROP_GAIN:
    case CV_CAP_PROP_EXPOSURE:
        retval = icvSetControl(capture, property_id, value);
        break;
    default:
        fprintf(stderr,
                "HIGHGUI ERROR: V4L: setting property #%d is not supported\n",
                property_id);
    }

    /* return the status */
    return retval;
}
How to solve it?
Using the Python wrappers for OpenCV, it worked for me to refer to the variable as:
cap = cv2.VideoCapture(1)
cap.set(cv2.cv.CV_CAP_PROP_FPS, 60)
I am using Python 2.7.3 and OpenCV 2.4.8.
The camera is a PS3 Eye.
I don't know if this is still valid, but some time ago, about a year and a half, I encountered exactly that problem. I contacted a developer of OpenCV, and he told me that access to some capture properties, and the ability to change them, wasn't implemented yet, and some properties only worked for certain kinds of camera. I eventually took a look at libdc1394 (working in Linux) and wrote some functions that converted the data retrieved by libdc1394 into OpenCV IplImages. It wasn't such a tough task.
CV_CAP_PROP_FPS is NOT a fake. See cap_libv4l.cpp (1) in the OpenCV GitHub repo. The key is to make sure you use libv4l rather than v4l when configuring OpenCV. For that, before running cmake, install libv4l-dev:
sudo apt-get install libv4l-dev
Now, while configuring OpenCV with cmake, enable the option WITH_LIBV4L. If all goes well, in the configuration status you will see something similar to the line below:
V4L/V4L2: Using libv4l1 (ver ) / libv4l2 (ver )
And then, while building your OpenCV code, you will have to link with libv4l1/libv4l2/libv4lconvert.
Arbitrary FPS values at the resolution you choose needn't be supported by your webcam. You may check the supported resolutions/FPS with a graphical tool like cheese, or commands like lsusb (2).
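Once OpenCV is built against libv4l, a minimal C++ sanity check looks like the sketch below (device index 0 is an assumption; cameras silently clamp unsupported values, so always read the property back):

#include <cstdio>
#include <opencv2/highgui/highgui.hpp>

int main() {
    cv::VideoCapture cap(0); // open the first camera
    if (!cap.isOpened()) {
        fprintf(stderr, "Could not open camera\n");
        return 1;
    }
    cap.set(CV_CAP_PROP_FPS, 30); // request 30 FPS
    // Read back what the driver actually accepted.
    printf("FPS now reported as: %f\n", cap.get(CV_CAP_PROP_FPS));
    return 0;
}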
Check out the OpenCV 2.4 handbook; the video capture interface is much better than before.
->set(CV_CAP_PROP_FPS, 30); works for me most of the time, though with somewhat low efficiency.
Just in case you don't like the new OpenCV 2.4 and still want to control your camera, check out the videoInput lib here. It works well and uses DirectShow features.
http://www.aishack.in/2010/03/capturing-images-with-directx/
http://www.muonics.net/school/spring05/videoInput/