DICOM (Digital Imaging and Communications in Medicine) is a standard for handling, storing, printing, and transmitting information in medical imaging. It includes a file format definition and a network communications protocol.
I want to write a .dcm file in my iOS project.
Can anyone suggest a link?
Update: Imebra 4.2 includes the full set of Objective-C wrappers, which also work with Swift.
Original answer:
Imebra can read and generate DICOM files on iOS as well.
It compiles for iOS and OS X; it's written in C++ but it can be used from Objective-C methods if the file extension is .mm instead of .m.
Since version 4.0.8.1, Imebra also contains a few Objective-C helpers that translate C++ strings to NSStrings (and vice versa) and extract a UIImage (or NSImage) from an Imebra image.
How to generate a DICOM file in Imebra (detailed instructions):
Create an empty dataset:
// We specify the transfer syntax and the charset
std::string transferSyntax(imebra::NSStringToString(@"1.2.840.10008.1.2.1"));
std::string encoding(imebra::NSStringToString(@"ISO 2022 IR 6"));
imebra::DataSet dataSet(transferSyntax, encoding);
Create an image, put it into the dataset:
// Create a 300 by 200 pixel RGB image with the high bit set to 15 (16 bits per channel)
std::string colorSpace(imebra::NSStringToString(@"RGB"));
imebra::Image image(300, 200, imebra::bitDepth_t::depthU16, colorSpace, 15);
{
    std::unique_ptr<imebra::WritingDataHandlerNumeric> dataHandler(image.getWritingDataHandler());

    // Set all the pixels to red
    for(std::uint32_t scanY(0); scanY != 200; ++scanY)
    {
        for(std::uint32_t scanX(0); scanX != 300; ++scanX)
        {
            dataHandler->setUnsignedLong((scanY * 300 + scanX) * 3, 65535);
            dataHandler->setUnsignedLong((scanY * 300 + scanX) * 3 + 1, 0);
            dataHandler->setUnsignedLong((scanY * 300 + scanX) * 3 + 2, 0);
        }
    }

    // dataHandler will go out of scope and will commit the data into the image
}
dataSet.setImage(0, image);
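Optionally, you can also fill in a few identification tags before saving. A minimal sketch, assuming Imebra 4's DataSet::setString and the tagId_t enumeration (the patient values here are placeholders):

// Placeholder values; assumes the TagId/tagId_t API from Imebra 4
dataSet.setString(imebra::TagId(imebra::tagId_t::PatientName_0010_0010), "Doe^John");
dataSet.setString(imebra::TagId(imebra::tagId_t::PatientID_0010_0020), "12345");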
Save the dataset:
std::string fileName(imebra::NSStringToString(@"path/to/file.dcm"));
imebra::CodecFactory::save(dataSet, fileName, imebra::codecType_t::dicom);
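To sanity-check the generated file you can load it back; a minimal sketch, assuming Imebra 4's CodecFactory::load and DataSet::getImageApplyModalityTransform:

std::unique_ptr<imebra::DataSet> loaded(imebra::CodecFactory::load(fileName));
std::unique_ptr<imebra::Image> firstImage(loaded->getImageApplyModalityTransform(0));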
(disclosure: I'm the author of Imebra)
I have added this version of ffmpeg (https://github.com/kewlbear/FFmpeg-iOS-build-script) to my project, but I can't see the entry point to the library in the included headers.
How do I get access to the same text-command-based system that the standalone application has, or an equivalent?
I would also be happy if someone could point me towards documentation on using FFmpeg without the command-line interface.
This is what I am trying to execute (I have it working on Windows and Android using the CLI version of ffmpeg):
ffmpeg -framerate 30 -i snap%03d.jpg -itsoffset 00:00:03.23333 -itsoffset 00:00:05 -i soundEffect.WAV -c:v libx264 -vf fps=30 -pix_fmt yuv420p result.mp4
Actually, you can build the ffmpeg library including the ffmpeg binary's code (ffmpeg.c). The only thing to take care of is renaming the function main(int argc, char **argv), for example to ffmpeg_main(int argc, char **argv); then you can call it with arguments just as if you were executing the ffmpeg binary. Note that argv[0] should contain the program name; just "ffmpeg" should work.
The same approach was used in the library VideoKit for Android.
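For illustration, a minimal sketch of this approach (assuming you renamed main to ffmpeg_main as described; the argument list mirrors the command from the question):

extern "C" int ffmpeg_main(int argc, char **argv); // the renamed entry point from ffmpeg.c

int runSnapshotEncode()
{
    const char *args[] = {
        "ffmpeg",                        // argv[0]: the program name
        "-framerate", "30",
        "-i", "snap%03d.jpg",
        "-itsoffset", "00:00:03.23333",
        "-itsoffset", "00:00:05",
        "-i", "soundEffect.WAV",
        "-c:v", "libx264",
        "-vf", "fps=30",
        "-pix_fmt", "yuv420p",
        "result.mp4"
    };
    int argc = sizeof(args) / sizeof(args[0]);
    return ffmpeg_main(argc, const_cast<char **>(args));
}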
To do what you want, you have to use your compiled FFmpeg library in your code.
What you are looking for is exactly the code provided by the FFmpeg documentation in libavformat/output-example.c (meaning the AVFormat and AVCodec libraries in general).
Stack Overflow is not a "do it for me, please" platform, so I prefer to explain here what you have to do; I will try to be precise and to answer all your questions.
I assume that you already know how to link your compiled (static or shared) library to your Xcode project; that is not the topic here.
So, let's talk about this code. It creates a video (containing randomly generated video and audio streams) of a given duration. You want to create a video from a list of pictures and a sound file. Perfect: there are only three main modifications you have to make:
The end condition is not reaching a duration, but reaching the end of your file list (in the code there is already a #define STREAM_NB_FRAMES you can use to iterate over all your frames).
Replace the dummy fill_yuv_image with your own method that loads and decodes an image buffer from a file (see the sketch after this list).
Replace the dummy write_audio_frame with your own method that loads and decodes the audio buffer from your file.
(You can find a "how to load audio file content" example in the documentation starting at line 271, easily adaptable for video content.)
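For point 2, here is a rough sketch of what such a loader could look like (the helper name load_jpeg_frame is mine, error handling and cleanup are trimmed, and it already uses the post-rename APIs listed further below, so adjust for your FFmpeg version):

#include <libavformat/avformat.h>

/* Decode a single JPEG from the sequence into an AVFrame,
   to be used in place of fill_yuv_image. */
static AVFrame *load_jpeg_frame(const char *path)
{
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
        return NULL;
    if (avformat_find_stream_info(fmt, NULL) < 0)
        return NULL;
    int stream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    AVCodecContext *dec = fmt->streams[stream]->codec;
    if (avcodec_open2(dec, avcodec_find_decoder(dec->codec_id), NULL) < 0)
        return NULL;

    AVFrame *frame = avcodec_alloc_frame(); /* av_frame_alloc() in newer versions */
    AVPacket pkt;
    int got = 0;
    while (!got && av_read_frame(fmt, &pkt) >= 0) {
        avcodec_decode_video2(dec, frame, &got, &pkt);
        av_free_packet(&pkt);
    }
    /* JPEGs usually decode to YUVJ420P; convert with sws_scale
       if the encoder expects STREAM_PIX_FMT. */
    return got ? frame : NULL;
}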
Comparing this code to your CLI command, you can figure out that:
const char *filename; in main should be your output file, "result.mp4".
#define STREAM_FRAME_RATE 25 (replace it with 30).
For MP4 generation, video frames will be encoded in H.264 by default (in this code, the GOP is 12), so there is no need to specify libx264.
#define STREAM_PIX_FMT PIX_FMT_YUV420P represents your desired yuv420p pixel format.
Now, with these official examples and the related documentation, you can achieve what you desire. Be careful: there are some differences between the FFmpeg version used in these examples and the current FFmpeg version. For example:
st = av_new_stream(oc, 1); // line 60
Could be replaced by:
st = avformat_new_stream(oc, NULL);
st->id = 1;
Or:
if (avcodec_open(c, codec) < 0) { // line 97
Could be replaced by:
if (avcodec_open2(c, codec, NULL) < 0) {
Or again:
dump_format(oc, 0, filename, 1); // line 483
Could be replaced by:
av_dump_format(oc, 0, filename, 1);
Or CODEC_ID_NONE replaced by AV_CODEC_ID_NONE, etc.
Ask your questions, but you've got all the keys! :)
MobileFFMpeg is an easy-to-use pod for this purpose. Instructions on how to use MobileFFMpeg are at: https://stackoverflow.com/a/59325680/1466453
MobileFFMpeg gives you a very simple method for translating ffmpeg commands to your iOS Objective-C program.
Virtually all ffmpeg commands and switches are supported. However, you have to get the pod with the appropriate license: e.g. min-gpl will not give you the features of libiconv, which is covered by the video, gpl and full-gpl packages.
Please point out any specific issues you have regarding the use of MobileFFMpeg.
As part of my project I want to send a stream of images from an embedded machine to a client application using WebSockets and display them in an img tag to achieve streaming.
At first I tried to send the raw RGB data (752*480*3, about 1 MB), but I ran into problems encoding the image to PNG in JavaScript from my RGB data, so instead I want to encode my data to PNG first and then send it using WebSockets.
The thing is, I am having some problems encoding my data to PNG using the OpenCV library that is already used in the project.
First, some code:
websocketBrokerStructure.matrix = cvEncodeImage(0, websocketBrokerStructure.bgrImageToSend, 0);
websocketBrokerStructure.imageDataLeft = websocketBrokerStructure.matrix->rows * websocketBrokerStructure.matrix->cols * websocketBrokerStructure.matrix->step;
websocketBrokerStructure.imageDataSent = 0;
but I am getting a strange error during the execution of the second line:
terminate called after throwing an instance of 'std::logic_error'
what(): basic_string::_S_construct NULL not valid
and I am a bit confused why I am getting this error from my code.
Also, I am wondering if I understand it right: after invoking cvEncodeImage (where bgrImage is an IplImage* with 3 channels, BGR), do I just need to iterate through the data member of my CvMat to get all of the PNG-encoded data?
The cvEncodeImage function takes as its first parameter the extension of the image you want to encode. You are passing 0, which is the same thing as NULL. That's why you are getting the message NULL not valid.
You should probably use this:
websocketBrokerStructure.matrix = cvEncodeImage(".png", websocketBrokerStructure.bgrImageToSend, 0);
You can check out the documentation of cvEncodeImage here.
You can check out some examples of cvEncodeImage, or its C++ brother imencode here: encode_decode_test.cpp. They also show some parameters you can pass to cvEncodeImage in case you want to adjust them.
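For completeness, a rough sketch of sending the encoded bytes (note that cvEncodeImage returns a single-row CV_8UC1 matrix, so the encoded length is cols, not rows * cols * step; sendOverWebsocket is a hypothetical stand-in for your transport code):

CvMat* encoded = cvEncodeImage(".png", websocketBrokerStructure.bgrImageToSend, 0);
// The PNG bytes start at encoded->data.ptr and are encoded->cols bytes long
sendOverWebsocket(encoded->data.ptr, encoded->cols); // hypothetical transport call
cvReleaseMat(&encoded);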
I am using the OBJLoader to load a large 3D model (described in a .obj file) and I want to display the model's name on its surfaces. However, it seems that Three.js can only display English characters. My question is: how can I display Chinese characters in Three.js?
There are a couple of ways to display text (https://github.com/mrdoob/three.js/wiki/Text-in-Three.js), but since exporting a Chinese font might be more difficult, it might be easier to draw the Chinese characters to a canvas texture and use that texture as a material in the scene.
It seems that you can use Facetype.js to get a Chinese font library, for example "YaHei_Regular.typeface.json"; then you can show Chinese characters.
var fontLoader = new THREE.FontLoader();
fontLoader.load("YaHei_Regular.typeface.json", (font) => {
    this.font = font;
});
Another option is using msdf-bmfont-xml. This example uses Microsoft YaHei.
charset.txt —
你好,世界
You may need to install dependencies first. Then:
npm install msdf-bmfont-xml -g
msdf-bmfont -f json yahei.ttf -i charset.txt --pot --square
and finally, use three-bmfont-text to render the text.
Three.js doesn't display Chinese out of the box because its default fonts don't support the charset; you have to load a suitable font dynamically.
There seem to be two methods for loading one: 1. new THREE.TTFLoader().load('*.ttf'), which loads a TTF file that supports the Chinese charset (but this failed for me); 2. new THREE.FontLoader().load('*.json'), which loads a JSON file converted from the TTF on the http://gero3.github.io/facetype.js/ website.
But first you have to find a complete TTF file. I tried 方正兰亭超细黑简体 and 方正赵佶瘦金书, which both work; you can google for and download the TTF files. I found that some TTFs can't be fully parsed by three.js, so you may see some Chinese characters display normally while others still display '?'.
The final code snippet is as follows:
const three_font = new THREE.FontLoader();
three_font.load('*.json', function (font_font) {
    font = font_font;
});
// finally add the text with the font (TextGeometry takes the string to render as its first argument)
const geometry = new THREE.TextGeometry(text, {
    font: font,
    size: size,
    height: h,
    curveSegments: 4,
    bevelThickness: 2,
    bevelSize: 2,
    bevelEnabled: true
});
geometry.computeBoundingSphere();
geometry.computeVertexNormals();
const mesh = new THREE.Mesh(geometry, pool.textMaterial);
mesh.position.set(x * deviation, y, z * deviation);
mesh.rotation.set(rx, ry, rz);
scene.add(mesh);
TJvDBImage is a good component that supports several picture formats. In JvJVCLUtils it is mentioned that the supported formats can be expanded with the RegisterGraphicSignature procedure. The comment says:
WHAT IT IS:
These are helper functions to register graphic formats that can
later be recognized from a stream, thus allowing you to rely on the actual
content of a file rather than on its filename extension.
This is used in TJvDBImage and TJvImage.
IMAGE FORMATS:
The implementation is simple: just register image signatures with the
RegisterGraphicSignature procedure and the methods take care
of the correct instantiation of the TGraphic object. The signatures
registered at the unit's initialization are: BMP, WMF, EMF, ICO, JPG.
If you have some other image library (such as GIF, PCX, TIFF, ANI or PNG),
just register the signature:
RegisterGraphicSignature(<string value>, <offset>, <class>)
or
RegisterGraphicSignature([<byte values>], <offset>, <class>)
This means:
when <string value> (or the byte values) is found at <offset>, the graphic
class to use is <class>.
For example (actual code from the initialization section):
RegisterGraphicSignature([$D7, $CD], 0, TMetaFile); // WMF
RegisterGraphicSignature([1, 0], 0, TMetaFile); // EMF
RegisterGraphicSignature('JFIF', 6, TJPEGImage);
You can also unregister a signature. If you want to use TGIFImage instead of
TJvGIFImage, you can unregister it with:
UnregisterGraphicSignature('GIF', 0);
or just
UnregisterGraphicSignature(TJvGIFImage); // must add JvGIF unit in uses clause
then:
RegisterGraphicSignature('GIF', 0, TGIFImage); // must add GIFImage to uses clause
I followed the instructions and added GIFImage to the uses clause of that unit. Also, in the procedure GraphicSignaturesNeeded I added:
RegisterGraphicSignature('GIF', 0, TGIFImage);
RegisterGraphicSignature([$4D, $4D, 0, $2A], 0, TWICImage); // TIFF, big-endian
RegisterGraphicSignature([$49, $49, $2A, 0], 0, TWICImage); // TIFF, little-endian
The TIFF info is based on
Tip: detecting graphic formats
Then I used makemodified.bat to recompile the JVCL.
Before the change, loading an image into the TJvDBImage would load the file and then raise an endless stream of "bitmap image is not valid" errors. After the change, it refuses to load the file and raises the same error once.
If I load a GIF / TIFF image into the field using other tools, displaying it raises the endless errors mentioned above. If I load the field content using the functions from the link above, it displays in a TImage perfectly.
So, what have I missed or doing wrong?
Thank you!
I am writing a QR code decoder using the ZBar API and the Windows pre-built libraries. I used the following code to load the image into ZBar:
IplImage *src=cvLoadImage("image.png",CV_LOAD_IMAGE_GRAYSCALE);
ImageScanner scanner;
scanner.set_config(ZBAR_NONE, ZBAR_CFG_ENABLE, 1);
int width = src->width;
int height = src->height;
uchar* raw = (uchar *)(src->imageData);
Image image(width, height, "Y800", raw, width * height);
int n = scanner.scan(image);
But it failed to decode the image. Am I using the correct way to read image data with OpenCV? In my tests only one image decoded and it failed for all the others, but it works well when I use the zbarimg command-line tool.
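One thing worth checking (my assumption, not something confirmed in the question): IplImage rows are padded to widthStep bytes, so passing width * height to zbar::Image is only safe when widthStep == width. A minimal sketch that copies the pixels into a contiguous buffer first:

#include <cstring>
#include <vector>

// Copy row by row so any padding at the end of each IplImage row is dropped
std::vector<unsigned char> gray(width * height);
for (int y = 0; y < height; ++y)
    std::memcpy(&gray[y * width],
                (unsigned char *)src->imageData + y * src->widthStep,
                width);

zbar::Image image(width, height, "Y800", &gray[0], width * height);
int n = scanner.scan(image);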