In my BlackBerry app, I want to display a screen with a video and a label field. I am using the following code to display and auto-play the video:
VerticalFieldManager vfm = new VerticalFieldManager();

// Create and realize the player for the video on the SD card
Player player = javax.microedition.media.Manager.createPlayer(
        "file:///SDCard/BlackBerry/videos/sample.3gp");
player.realize();

// Render the video into a Field so it can be added to the manager
VideoControl videoControl = (VideoControl) player.getControl("VideoControl");
Field videoField = (Field) videoControl.initDisplayMode(
        VideoControl.USE_GUI_PRIMITIVE, "net.rim.device.api.ui.Field");

VolumeControl volume = (VolumeControl) player.getControl("VolumeControl");
volume.setLevel(30);

player.start();
vfm.add(videoField);
// ... (label field and other fields are added here)
add(vfm);
It is working well. But when the video is over, it shows a black screen; instead, I want to display the first screen when playback is over, and the video should be replayable if the user wants.
What change is needed in my code to achieve this?
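A minimal sketch of one way to do this, using the standard MMAPI PlayerListener callback: listen for END_OF_MEDIA, rewind the player, and hand control back to your first screen (how you show that screen and the "Replay" handling are up to your app and hypothetical here).

import javax.microedition.media.MediaException;
import javax.microedition.media.Player;
import javax.microedition.media.PlayerListener;
import net.rim.device.api.ui.UiApplication;

// ...

player.addPlayerListener(new PlayerListener() {
    public void playerUpdate(final Player p, String event, Object eventData) {
        if (PlayerListener.END_OF_MEDIA.equals(event)) {
            // playerUpdate() runs on a media thread; do UI work on the event thread
            UiApplication.getUiApplication().invokeLater(new Runnable() {
                public void run() {
                    try {
                        p.setMediaTime(0); // rewind so the first frame shows instead of black
                    } catch (MediaException e) {
                        // rewinding is best-effort; log if needed
                    }
                    // e.g. pop this screen to return to the first screen,
                    // and call p.start() again when the user chooses "Replay"
                }
            });
        }
    }
});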
I have a video tag and it plays a simple video. [works]
I have a canvas2d playing the same video. [works]
opencv.js video processing (canvas is output, video is input) also works.
I have three.js with a plane mesh.
texture = new THREE.CanvasTexture(this.$refs.testcanvas)
texture.needsUpdate = true;
materialLocal = new THREE.MeshBasicMaterial({ map: texture })
materialLocal.needsUpdate = true;
materialLocal.map.needsUpdate = true;
this.mainVideoMesh.material = materialLocal
this.mainVideoMesh.material.needsUpdate = true;
No help. I just get the first frame of the video as the texture, and then it stops updating.
At runtime I found:
this.scene.children[2].material.map.needsUpdate: undefined
Strange situation. Any suggestions?
When using a video as a data source for a texture, the idea is to use the THREE.VideoTexture class. This type of texture automatically manages its needsUpdate flag to ensure new frames are correctly displayed on your plane mesh.
Using THREE.VideoTexture requires that you pass the video element as the argument, not the canvas.
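A minimal sketch, assuming the video element is reachable via a ref (this.$refs.video is a hypothetical name):

// THREE.VideoTexture samples a fresh frame from the video element on
// every render and handles needsUpdate internally.
const texture = new THREE.VideoTexture(this.$refs.video);
texture.minFilter = THREE.LinearFilter; // videos are rarely power-of-two sized
this.mainVideoMesh.material = new THREE.MeshBasicMaterial({ map: texture });

If you do need the opencv.js-processed canvas as the source rather than the raw video, THREE.CanvasTexture can work too, but then you must set texture.needsUpdate = true yourself after every processed frame.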
I am working with OpenCV, JavaCV, and JavaFX.
I want to show the captured webcam feed not in a CanvasFrame but on the scene of my running application.
Here is the code of the method which does the capturing:
@FXML
void openWebcamOnBtnClick(ActionEvent event)
{
    Thread thread = new Thread()
    {
        @Override
        public void run()
        {
            // Open the default camera and request 640x360 frames
            opencv_highgui.CvCapture capture = opencv_highgui.cvCreateCameraCapture(0);
            opencv_highgui.cvSetCaptureProperty(capture, opencv_highgui.CV_CAP_PROP_FRAME_HEIGHT, 360);
            opencv_highgui.cvSetCaptureProperty(capture, opencv_highgui.CV_CAP_PROP_FRAME_WIDTH, 640);
            opencv_core.IplImage grabbedImage = opencv_highgui.cvQueryFrame(capture);

            CanvasFrame frame = new CanvasFrame("Webcam");
            frame.setCanvasSize(640, 360);

            // Show frames until the window is closed or the camera stops delivering
            while (frame.isVisible() && (grabbedImage = opencv_highgui.cvQueryFrame(capture)) != null)
            {
                frame.showImage(grabbedImage);
            }
            opencv_highgui.cvReleaseCapture(capture);
            grabbedImage.release();
            //frame.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
        }
    };
    thread.start();
}
By doing so, the webcam starts capturing perfectly fine and shows it in a CanvasFrame (on a new stage), but I want it to show on the old stage whenever I click the "Open Webcam" button, which is on the old stage.
Can I show the capturing webcam in an ImageView or any pane?
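Yes. A minimal sketch, assuming an @FXML-injected ImageView on the old stage (webcamView is a hypothetical name) and an older JavaCV version where IplImage still provides getBufferedImage(): convert each grabbed frame and hand it to the ImageView on the JavaFX thread instead of showing it in a CanvasFrame.

import java.awt.image.BufferedImage;
import javafx.application.Platform;
import javafx.embed.swing.SwingFXUtils;
import javafx.scene.image.ImageView;

@FXML
private ImageView webcamView; // hypothetical ImageView placed in the old scene

// inside the capture loop, instead of frame.showImage(grabbedImage):
final BufferedImage buffered = grabbedImage.getBufferedImage();
Platform.runLater(new Runnable() {
    public void run() {
        // Convert the AWT image to a JavaFX image and display it
        webcamView.setImage(SwingFXUtils.toFXImage(buffered, null));
    }
});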
I used YouTubePlayerSupportFragment to play a YouTube video, but it plays the video for 2 or 3 seconds and then stops automatically. I can't find what's wrong.
Below is my code for the fragment transaction:
mYoutubePlayerFragment = new YouTubePlayerSupportFragment();
mYoutubePlayerFragment.initialize("key", mListnnerYouTube);

FragmentManager fragmentManager = getChildFragmentManager();
FragmentTransaction fragmentTransaction = fragmentManager.beginTransaction();
fragmentTransaction.replace(R.id.fragment_youtube_player, mYoutubePlayerFragment);
fragmentTransaction.commit();
and my onInitializationSuccess:

@Override
public void onInitializationSuccess(Provider arg0, YouTubePlayer player,
        boolean wasRestored) {
    activeplayer = player;
    activeplayer.setPlayerStyle(YouTubePlayer.PlayerStyle.DEFAULT);
    if (!wasRestored) {
        // player.cueVideo("nCgQDjiotG0");
        player.loadVideo("nCgQDjiotG0");
    }
}
I am using YouTubePlayerSupportFragment inside another fragment.
Error message:
YouTube video playback stopped due to unauthorized overlay on top of player. The YouTubePlayerView is obscured by com.viewpagerindicator.CirclePageIndicator{420b9758 V.ED.... ......I. 0,412-720,465 #7f0a005c app:id/indicator}. The view is inside the YouTubePlayerView, with the distance in px between each edge of the obscuring view and the YouTubePlayerView being: left: 0, top: 412, right: 0, bottom: 0..
You're most likely displaying something on top of the player view, which is forbidden by the YouTube SDK.
Check the logs: you should see a warning from the YouTube SDK that specifies why video playback was stopped. If a View is covering part of the player, the log will give its ID and its position on the screen.
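Here the log points at the CirclePageIndicator overlapping the bottom edge of the player. A minimal layout sketch of the fix (sizes and the parent layout are illustrative): lay the indicator out below the player container instead of on top of it.

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- Container that the YouTube player fragment is placed into -->
    <FrameLayout
        android:id="@+id/fragment_youtube_player"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />

    <!-- Keep the indicator outside the player's bounds -->
    <com.viewpagerindicator.CirclePageIndicator
        android:id="@+id/indicator"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/fragment_youtube_player" />

</RelativeLayout>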
My application contains a video with custom UIButtons to manage the video.
Using the YouTube iFrame API, I have managed to play the YouTube video in a UIWebView and hide all its default controls (fullscreen, volume, etc.).
Now I want to control the video through custom buttons. How do I do that?
- fullscreen UIButton: to make the video fullscreen
- Mute/Unmute button: to mute/unmute the video
See the screenshot from my other question:
objective-c: play video by removing the default fullscreen, etc functionality
How do I solve this? Code for the video in the UIWebView:
NSString *htmlString = @"<!DOCTYPE html><html> <body><div id=\"player\"></div><script>var tag = document.createElement('script');tag.src = \"https://www.youtube.com/iframe_api\";var firstScriptTag = document.getElementsByTagName('script')[0];firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);var player;function onYouTubeIframeAPIReady() {player = new YT.Player('player', {height: '196',width: '309',videoId: 'GOiIxqcbzyM',playerVars: {playsinline: 1, controls: 0}, events: {'onReady': onPlayerReady,'onStateChange': onPlayerStateChange}});}function onPlayerReady(event) {event.target.playVideo();}var done = false;function onPlayerStateChange(event) {if (event.data == YT.PlayerState.PLAYING && !done) {setTimeout(stopVideo, 6000);done = true;}}function stopVideo() {}</script></body></html>";
_webViewVideo.delegate = self;
static NSString *youTubeVideoHTML = @"<iframe webkit-playsinline width=\"309\" height=\"200\" src=\"https://www.youtube.com/embed/GOiIxqcbzyM?feature=player_detailpage&playsinline=1\" frameborder=\"0\"></iframe>";
[_webViewVideo loadHTMLString:htmlString baseURL:[[NSBundle mainBundle] resourceURL]];
This combination of playerVars hides the default controls:
NSDictionary *playerVars = @{
    @"controls" : @0,
    @"playsinline" : @1,
    @"autohide" : @1,
    @"showinfo" : @0,
    @"modestbranding" : @1,
    @"rel" : @0
};
To make the video fullscreen on a button tap, try saving the video's current elapsed time, changing playerVars to show the controls, and triggering the fullscreen button action programmatically (the video then plays fullscreen); use the seek method to resume playback from the saved elapsed time.
For mute/unmute, try reducing the current phone volume programmatically to mute and restoring it to unmute.
That said, I don't think Apple provides an API to change the system volume; only the user can do it using the hardware controls.
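An alternative that avoids the system volume entirely: the JS player object created in the HTML above already exposes the iFrame API's mute() and unMute(), and you can call into them from the web view (a sketch; the action method names are hypothetical):

// Calls into the JS "player" created in onYouTubeIframeAPIReady()
- (IBAction)muteTapped:(UIButton *)sender {
    [_webViewVideo stringByEvaluatingJavaScriptFromString:@"player.mute();"];
}

- (IBAction)unmuteTapped:(UIButton *)sender {
    [_webViewVideo stringByEvaluatingJavaScriptFromString:@"player.unMute();"];
}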
I was thinking of making a Windows Form with a 50x50 drawing space somewhere on it (a bitmap?) and having the user draw inside the square, like in MS Paint. When the user is done, the picture can be saved by clicking the "save" button, and it will be picked up in Game1 (for collision purposes in my game). I've seen some tutorials on here on how to draw on screen like MS Paint, but I can't seem to figure out how to save that picture as a Texture2D/Rectangle. And how do I get a bitmap onto a Windows Form?
To save a bitmap as a PNG:

private void SaveBmpAsPNG(Bitmap bm)
{
    bm.Save(@"c:\button.png", ImageFormat.Png);
}
To write a Texture2D to a file:

using (Stream stream = File.OpenWrite("picture.png"))
{
    texture.SaveAsPng(stream, texture.Width, texture.Height);
}
To read a .png into a Texture2D:

using (Stream stream = File.OpenRead("picture.png"))
{
    texture = Texture2D.FromStream(GraphicsDevice, stream);
}
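To get a drawable bitmap onto the form in the first place, here is a minimal sketch, assuming a 50x50 PictureBox dropped on the form (drawingBox is a hypothetical name): paint into a Bitmap with GDI+ on mouse events and let the PictureBox display it.

using System.Drawing;
using System.Windows.Forms;

// in the form's constructor, after InitializeComponent():
Bitmap canvas = new Bitmap(50, 50);
drawingBox.Image = canvas;

drawingBox.MouseMove += (s, e) =>
{
    if (e.Button == MouseButtons.Left)
    {
        // Paint a small dot at the cursor, MS Paint style
        using (Graphics g = Graphics.FromImage(canvas))
        {
            g.FillRectangle(Brushes.Black, e.X, e.Y, 2, 2);
        }
        drawingBox.Invalidate(); // repaint so the new pixels show up
    }
};

The existing SaveBmpAsPNG(canvas) can then persist the drawing, and Texture2D.FromStream can load it in Game1.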