
Why is FFMpeg outputting wrong NAL unit types? (javascript h264 livestream)


I am trying to set up a livestream in the browser using h264 encoding, in which javascript decodes the h264 frames and paints them onto a canvas element (or renders them using WebGL).

Both Broadway and Prism implement decoding NAL units of type 1, 5, 7, and 8.

My current setup is as follows:

  • FFMpeg outputs an MPEG-TS stream with h264 data
  • The stream is piped to netcat which listens on port 8084
  • A websocket server in NodeJS pipes data from port 8084 to clients on 8085
  • The jsmpeg library decodes MPEG-TS into separate NAL units
  • The separate NAL units are decoded by Broadway or Prism which outputs to a canvas

I am using this FFMpeg command:

ffmpeg -f v4l2 -i /dev/video0 -r 15 -c:v h264_nvenc -pix_fmt yuv420p -b:v 500k -profile:v baseline -tune zerolatency -f mpegts - | nc -l -p 8084 127.0.0.1

The problem is that the NAL units I'm getting are of type 9 (or maybe 6?). Here is the header of one of the NAL units that javascript is receiving, in Base64 and binary form:

echo "AAAAAQnwAAAAAQYBBAAECBCAAAAAAWHg" | base64 -d | xxd -b
00000000: 00000000 00000000 00000000 00000001 00001001 11110000 ......
00000006: 00000000 00000000 00000000 00000001 00000110 00000001 ......
0000000c: 00000100 00000000 00000100 00001000 00010000 10000000 ......
00000012: 00000000 00000000 00000000 00000001 01100001 11100000 ....a.

Neither Broadway nor Prism supports these NAL unit types. How can I configure FFMpeg to only output NAL units of type 1, 5, 7, and 8?
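
For reference, the first unit in the dump above is type 9 (an access unit delimiter) and the second is type 6 (SEI); both are metadata-only units that a decoder limited to types 1, 5, 7 and 8 can safely drop. One way to strip them on the FFmpeg side is a bitstream filter. A minimal sketch, assuming a build recent enough to ship the filter_units bitstream filter (everything else as in the command above):

ffmpeg -f v4l2 -i /dev/video0 -r 15 -c:v h264_nvenc -pix_fmt yuv420p -b:v 500k \
  -profile:v baseline -tune zerolatency \
  -bsf:v "filter_units=remove_types=6|9" \
  -f mpegts - | nc -l -p 8084 127.0.0.1

Failing that, the JavaScript side can simply skip any NAL unit whose type is not 1, 5, 7 or 8 before handing data to Broadway or Prism.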


avformat/hlsenc: improve compute after_init_list_dur

avformat/hlsenc: improve compute after_init_list_dur

Fix ticket: 7305

vs->sequence - hls->start_sequence - vs->nb_entries is the number of fragments covered by after_init_list_dur; fix the wrong computation vs->sequence - vs->nb_entries.

Signed-off-by: Steven Liu
  • [DH] libavformat/hlsenc.c

FFMPEG: change input without stopping process


How can I change the input of a running ffmpeg process on Debian 9 Linux without stopping it? I am using a DeckLink input and need to switch to an mp4 file input.

ffmpeg -f decklink -i 'DeckLink Mini Recorder' -vf setpts=PTS-STARTPTS -pix_fmt uyvy422 -s 1920x1080 -r 25000/1000 -f decklink 'DeckLink Mini Monitor'
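
One approach that avoids restarting the playout process, sketched under the assumption that both sources can be pre-encoded to the same intermediate format: keep the DeckLink-output ffmpeg reading from a named pipe and swap the process that feeds the pipe. File names and bitrates below are illustrative:

mkfifo /tmp/live.ts

# long-running playout: never restarted, plays whatever arrives on the FIFO
ffmpeg -f mpegts -i /tmp/live.ts -vf setpts=PTS-STARTPTS -pix_fmt uyvy422 \
  -s 1920x1080 -r 25000/1000 -f decklink 'DeckLink Mini Monitor' &

# feeder 1: the capture card
ffmpeg -f decklink -i 'DeckLink Mini Recorder' -c:v mpeg2video -b:v 20M -f mpegts /tmp/live.ts

# stop feeder 1, then start feeder 2 without touching the playout process
ffmpeg -re -i input.mp4 -c:v mpeg2video -b:v 20M -f mpegts /tmp/live.ts

Timestamps will jump at the switch, so expect a brief glitch; a truly seamless handover needs a dedicated relay, but the FIFO trick is the simplest way to keep one process running across input changes.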

FFMPEG, "Could not open file.. ""av_interleaved_write_frame(): Input/output error"


I'm trying to run a simple ffmpeg command in my Ubuntu 16.04 VPS shell,

ffmpeg -ss 1 -i /var/www/html/lib/vid/4581989_022810100718.mp4 -r 1 -vframes 1 /var/www/html/lib/thumbnail/4581989_022810100718/20.jpg

but am met with

Could not open file 

and

av_interleaved_write_frame(): Input/output error

I have already tried giving the ../lib/vid folder in question 777 permissions, but it didn't help.

Full print-out:

ffmpeg version 2.8.14-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.9) 20160609 configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/var/www/html/lib/vid/4581989_022810100718.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.36.100
Duration: 00:00:04.90, start: 0.000000, bitrate: 2595 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1820x1024 [SAR 1:1 DAR 455:256], 2593 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
Metadata: handler_name : VideoHandler
[swscaler @ 0x1652280] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to '/var/www/html/lib/thumbnail/4581989_022810100718/20.jpg':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.40.101
Stream #0:0(und): Video: mjpeg, yuvj420p(pc), 1820x1024 [SAR 1:1 DAR 455:256], q=2-31, 200 kb/s, 1 fps, 1 tbn, 1 tbc (default)
Metadata: handler_name : VideoHandler encoder : Lavc56.60.100 mjpeg
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[image2 @ 0x16103a0] Could not open file : /var/www/html/lib/thumbnail/4581989_022810100718/20.jpg
av_interleaved_write_frame(): Input/output error
frame= 1 fps=0.0 q=3.4 Lsize=N/A time=00:00:01.00 bitrate=N/A video:48kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!
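
One thing worth checking before permissions: ffmpeg's image2 muxer does not create missing directories, so if /var/www/html/lib/thumbnail/4581989_022810100718/ does not exist, the open fails with exactly this pair of errors no matter what mode ../lib/vid has. A quick test using the paths from the command above:

mkdir -p /var/www/html/lib/thumbnail/4581989_022810100718
ffmpeg -ss 1 -i /var/www/html/lib/vid/4581989_022810100718.mp4 -r 1 -vframes 1 \
  /var/www/html/lib/thumbnail/4581989_022810100718/20.jpg

If the directory does exist, the same error can still come from the user running ffmpeg lacking write permission on the thumbnail directory itself (not the vid directory).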

Reporting duplicated frames with FFmpeg


I am looking for a method to report (not just detect and remove) duplicated frames of video detected by FFmpeg - similar to how you can print out blackdetect, cropdetect, silencedetect, etc.

For example:

ffmpeg -i input.mp4 -vf blackdetect -an -f null - 2>&1 | grep blackdetect > output.txt

Outputs something like:

[blackdetect @ 0x7f8032f03680] black_start:5.00501 black_end:7.00701 black_duration:2.002

But there's no "dupedetect" filter as far as I know, so I'm looking for any ideas/workarounds to get a read of where frames are duplicated.
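
One workaround is to run the mpdecimate filter against the null muxer: nothing is removed from any real output, but in the builds I have tried the filter logs a per-frame keep/drop decision (with pts and a running drop count) at debug level, which gives a report in the same spirit as blackdetect. A sketch, assuming input.mp4 as above:

ffmpeg -i input.mp4 -vf mpdecimate -loglevel debug -f null - 2>&1 | grep 'mpdecimate' > output.txt

The drop lines mark the frames mpdecimate considers duplicates; the keep lines are everything else. The filter's hi/lo/frac thresholds can be tuned if it flags too much or too little.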

FFmpeg using complex filter amerge doesn't play on iOS


I am using 2 different FFmpeg commands to add audio to a video. This one adds audio, replacing the video's existing audio:

ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -map 0:v -map 1:a -shortest -vcodec libx264 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"

It works fine.

The problem comes when I try to mix the new audio with the video's existing audio:

ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -filter_complex "[0:a][1:a]amerge=inputs=2[a]" -map 0:v -map "[a]" -shortest -vcodec libx264 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"

The video plays fine everywhere but on iOS. I've tried adding -profile:v main -level 3.1 and -profile:v baseline -level 3.1 but no luck either.

ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -filter_complex "[0:a][1:a]amerge=inputs=2[a]" -map 0:v -map "[a]" -shortest -vcodec libx264 -profile:v baseline -level 3.1 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"

What do I need to do to make the output video play on iOS?
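
One hedge worth ruling out: amerge with two stereo inputs produces a four-channel stream, and AAC audio with more than two channels is a common reason an otherwise valid MP4 refuses to play on iOS devices. A sketch of the same command with the merged audio downmixed back to stereo:

ffmpeg -i "inputVideo.wmv" -i "inputAudio.mp3" -filter_complex "[0:a][1:a]amerge=inputs=2[a]" -map 0:v -map "[a]" -ac 2 -shortest -vcodec libx264 -profile:v baseline -level 3.1 -preset ultrafast -crf 22 -pix_fmt yuv420p -r 30 "outputVideo.mp4"

Alternatively, amix=inputs=2 mixes instead of merging and keeps the output at two channels by itself.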

ffmpeg - embed metadata that updates regularly in video


I have a video that was recorded with an ROV, or an underwater drone if you will. The video is stored in raw H.264, and lots of data is logged during a dive, like temperature, depth, pitch/roll/yaw, etc. Each log entry is timestamped with seconds since epoch.

Copying the raw H264 into an mp4 container at the correct framerate is easy, but I'd like to create a video that displays some or all of the metadata. I'd like to automate the process, so that I can come back from a trip and run a conversion batch tool that applies the metadata from new dives into the new video recordings. I'm hoping to avoid manual steps.

What are my options to display details on the video? Displaying text on the video is a common thing to do, but it isn't as clear to me how I could update it every few seconds based on an epoch timestamp from the logs. If I use -vf and try to use frame ranges for when to display each value, that'll be a very long filter. Would it help somehow if I generate an overlay video first? I don't see how that will be much easier, though.

Examples of some of the details I am hoping to embed are:

  • depth
  • temperature
  • pitch, roll and yaw, perhaps by using "sprites" that can be rotated based on the logged rotation around each axis

Here's a small sample of some of the logged data:

1531168847.588000000 depth: 5.7318845
1531168848.229000000 attitude.calc.roll: -178.67145730705903
1531168848.229000000 attitude.calc.pitch: 8.89832326120461
1531168848.598000000 pressure.fluid_pressure_: 1598.800048828125
1531168848.898000000 temp.water.temperature.temperature_: 13.180000305175781
1531168849.229000000 attitude.calc.roll: -177.03372656909875
1531168849.229000000 attitude.calc.pitch: 3.697970708364904
1531168849.605000000 pressure.fluid_pressure_: 1594.0999755859375
1531168850.235000000 attitude.calc.yaw: 19.87041354690573
1531168850.666000000 pressure.fluid_pressure_: 1593.300048828125

The various values are logged at fairly irregular intervals, and they are not all updated at the same time.

I can massage the data into any necessary format. I also have a timestamp (epoch based) of when each recording started, so I can calculate approximate frame numbers as necessary. I have been searching for ways to apply the epoch timestamp to the video (PTS, RTCTIME/RTCSTART), without luck so far. If I manage that, I imagine the enable option might be easier to use, but I'm still not sure very long video filters are the way to go.
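
One route that keeps the filter graph short no matter how long the dive: drive a single drawtext instance through the sendcmd filter, whose command file you generate from the log (drawtext accepts a reinit command at runtime). A minimal sketch, assuming the recording-start epoch has already been subtracted so times are seconds from the start of the video; the file names are illustrative:

# one timed reinit per log update, generated by your batch tool
cat > cmds.txt <<'EOF'
0.5 drawtext reinit 'text=depth 5.73 m';
1.8 drawtext reinit 'text=depth 5.80 m  temp 13.18 C';
EOF

ffmpeg -framerate 30 -i dive.h264 \
  -vf "sendcmd=f=cmds.txt,drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:fontcolor=white:fontsize=24:x=20:y=20:text=''" \
  -c:v libx264 -crf 18 dive_overlay.mp4

Rotating pitch/roll/yaw sprites would need overlay inputs and something like the rotate filter (which also takes runtime commands), but the textual readouts stay one filter and one generated file regardless of log length.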

Revision 24026: [Salvatore] [source:spip/ecrire/lang/ public] Export from http://trad.spip.net ...


How to download m3u8 file from websites like hotstar in php, js, nodejs [on hold]


I'm trying to download the m3u8 file from a website's video link (like Hotstar, Voot, etc.) using PHP, JavaScript, Node.js, or Python.

Please help. Thanks.

FFMPEG: Fixing stutter in low-motion areas


I'm trying to create videos with a very specific handful of requirements using FFMPEG:

  • Must have a very low (ideally less than 0.5 seconds) keyframe rate
  • Must have a moderately low (~1Mbps) bitrate
  • Must run at a reasonable (~24fps) framerate
  • Must have a width multiple of 4
  • Must not have any B-frames
  • Must be H.264 baseline encoded
  • Must be FLV

Encoding speed is of no concern. If it takes 2 minutes to encode 1 second of video, that's absolutely fine. What matters is that the output retains quality at the lowest possible bitrate.

To this effect I currently have the following FFMPEG command:

ffmpeg \
  -fflags +genpts \
  -i big_buck_bunny_1080p_stereo.avi \
  -vf "scale=trunc(360*iw/ih/4)*4:360" \
  -vf "settb=1/1000" \
  -r 24 \
  -g 6 \
  -keyint_min 6 \
  -force_key_frames "expr:gte(t,n_forced/4)" \
  -c:v libx264 \
  -preset veryslow \
  -tune zerolatency \
  -profile:v baseline \
  -pix_fmt yuv420p \
  -b:v 1000k \
  -c:a speex \
  -ar 16000 \
  -ac 1 \
  -b:a 64k \
  -f flv bbb_lo.flv

I wish to experiment with various encoding options (me_method, subq, etc) to see how they all affect quality and bitrate. Before that, though, I've got an immediate quality issue to address with the command above.

See the video here on YouTube

I've clipped just a portion of the video that really demonstrates the issue. When an area of the screen undergoes very slight changes in color, there are no motion vectors. This means that certain sections of the video go un-updated until the next keyframe. This can be seen strongly in the tree on the left at the beginning of the video or in the bunny while he's still asleep. If a viewer were staring at certain regions of the screen it may look like the video is only running at 4 frames per second (my keyframe rate) even though the video is actually running at 24 frames per second -- it just isn't updating the entire screen.

I'd be okay if these areas of the screen became heavily blurred so long as the motion is preserved. Doing a bit of research, I thought that the option -flags2 -fastpskip would fix this; however, this option is not working for me:

[libx264 @ 0x55b63e32c760] [Eval @ 0x7ffea2a7a830] Undefined constant or missing '(' in 'fastpskip'
[libx264 @ 0x55b63e32c760] Unable to parse option value "fastpskip"
[libx264 @ 0x55b63e32c760] Error setting option flags2 to value -fastpskip.

How can I fix this to preserve motion at the cost of image quality?
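
-flags2 is the old pre-private-options syntax, and current builds no longer accept fastpskip there, which is what that Eval error is saying. The same knob is still reachable through libx264's option passthrough. A sketch of the command above with fast P-skip disabled this way (note also that two separate -vf options do not stack, the second overrides the first, so scale and settb are combined into one chain here):

ffmpeg -fflags +genpts -i big_buck_bunny_1080p_stereo.avi \
  -vf "scale=trunc(360*iw/ih/4)*4:360,settb=1/1000" \
  -r 24 -g 6 -keyint_min 6 -force_key_frames "expr:gte(t,n_forced/4)" \
  -c:v libx264 -preset veryslow -tune zerolatency -profile:v baseline \
  -pix_fmt yuv420p -b:v 1000k \
  -x264-params "fast-pskip=0" \
  -c:a speex -ar 16000 -ac 1 -b:a 64k \
  -f flv bbb_lo.flv

-x264opts no-fast-pskip is an equivalent spelling on builds that predate -x264-params.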


Anomalie #2381: Fix the subheading level


I repeat: the HTML version has no bearing on this, since the handling of headings stays the same in HTML5. So it doesn't matter whether it's HTML5 or not.

b_b: other blocks of the page being at the same level as the content's subheadings is still less bad than missing the content altogether.
nico d_: a

has no semantic value as a separator. Forget it.

For the record, on the damage caused and the possible fixes: http://romy.tetue.net/corriger-intertitre-SPIP

Automating FFmpeg/ multi-core support


I need help with FFmpeg/batch. I have a couple of large batches of images (14,000+ files per batch, 5+ MB per image, all .TIFF) and I'm stringing them together into a .mp4 video using FFmpeg.

The date in the metadata is broken because of the way the files are stored upon creation, so the time and date (T/D) are in the file name. I need each frame to have its respective T/D (i.e. its file name) burnt onto it for accurate measurements (scientific purpose).

With the help of google and reddit, I've managed to semi-automate it like so:

Master.bat:

forfiles /p "D:\InputPath" /m "*.TIFF" /S /C "cmd /c C:\SlavePath\slave.bat @file @fname"

Slave.bat:

ffmpeg -i "%~1" -vf "drawtext=text=%~2: fontcolor=white: fontsize=30: fontfile='C\:\\FontPath\\OpenSans-Regular.ttf'""D:\OutputPath\mod_%~1"

Running Master.bat will output each individual image with the text burnt onto them and change the File_Name to mod_'File_name'.TIFF

Real example: 2018-06-05--16-00-01.0034.TIFF turns into mod_2018-06-05--16-00-01.0034.TIFF

The problem is that FFmpeg doesn't like it when my files have "--" in them ("date--time.milliseconds.TIFF") and doesn't like the milliseconds either, so I have to change the name of all files "manually" using Bulk Rename Utility (BRU). So, using BRU I rename all files to 00001.TIFF, 00002.TIFF, etc. and FFmpeg likes me again. It works great, but it means I can't be AFK.

After that, I have to go back to cmd and manually start the image to video conversion.

Also, FFmpeg doesn't seem to be using all cores.


I need help finding a way to:

  1. Change master.bat's output to 00001.TIFF etc. automatically in order of processing (i.e. first to be processed is 1.TIFF, 2nd is 2.TIFF)
  2. Add ffmpeg's img-to-vid function to the automating system
  3. Get my CPU to use all the cores effectively if possible. 2014/15 posts found on google make it seem as though FFmpeg doesn't support multi-core or hyperthreading.

64bit Windows, i7 7700hq, gtx 1050 4Gb, C: SSD, D: HDD
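
For what it's worth, a large part of both problems likely comes from launching one ffmpeg per image: each invocation encodes a single frame, so libx264's internal threading (which does use all cores on a normal encode) never gets a chance. A single-pass sketch, assuming an FFmpeg 4.3+ build whose image2 demuxer supports export_path_metadata and glob patterns (many Windows builds lack glob, in which case the numeric rename is still needed but the rest applies):

ffmpeg -f image2 -pattern_type glob -export_path_metadata 1 -framerate 25 \
  -i "D:/InputPath/*.TIFF" \
  -vf "drawtext=text='%{metadata\:lavf.image2dec.source_basename}': fontcolor=white: fontsize=30: fontfile='C\:\\FontPath\\OpenSans-Regular.ttf'" \
  -c:v libx264 -pix_fmt yuv420p "D:\OutputPath\out.mp4"

This burns each frame's own file name (your date--time stamp, double hyphens and all) onto the frame and writes the video in one go; remember to double the % signs (%%) if the line goes inside a .bat file.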

Adding Gapless Playback information to AAC


I'm currently trying to develop a video/audio encoding pipeline. My goal is to encode mp4 files containing an h264 video track and an AAC audio track. These files should be played one after another without any gaps in between.

Currently I'm converting the videos with ffmpeg. Unfortunately my input files are missing the gapless-playback metadata, which is needed for gapless playback of the AAC track.

In fact I'm looking for a way to add the iTunSMPB udta comment, as it is needed by ExoPlayer. (See the parser for details: GaplessInfoHolder.java)

I could not find a way to add this via ffmpeg (ffmpeg AAC encoder doc); did I maybe miss something?

Even Wikipedia only lists two converters that should be able to do that: Nero Digital and iTunes. But this information could be outdated.

Does anyone know of a Java library or (Linux) command that can add this metadata to an mp4 file?

I hope some of you might be able to help me. Thank you.


Concatenating 2 videos with ffmpeg using ffmpy


I am trying to concatenate 2 videos, but my ffmpeg command must be wrong. The output is only the second video, video2.avi.

from ffmpy import FFmpeg
ff = FFmpeg(inputs={'video1.avi': None, 'video2.avi': None}, outputs={'output.avi': None })
ff.cmd
'ffmpeg -f concat -i video1.avi -i video2.avi output.avi'
ff.run()
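
The concat demuxer does not take multiple -i options; it expects one text file that lists the inputs, which is likely why only video2.avi survives here. A sketch of the intended call, assuming both clips share codec parameters:

printf "file 'video1.avi'\nfile 'video2.avi'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.avi

In ffmpy terms that is inputs={'list.txt': '-f concat -safe 0'} with a single output entry; if the clips differ in codecs or size, the concat filter via -filter_complex is the usual fallback.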

Android studio + OpenCV + FFmpeg


I have a problem with this code: it works on a Genymotion device (Android 4.1.1), but not on a Genymotion 5.0.1 device or on a real Huawei Honor 4C (Android 4.4.2).

I imported OpenCV 3.1 into Android Studio following: https://stackoverflow.com/a/27421494/4244605
I added JavaCV with FFmpeg following: https://github.com/bytedeco/javacv

Android studio 1.5.1
minSdkVersion 15
compileSdkVersion 23

The code is only for testing.
OpenCVCameraActivity.java:

import android.app.Activity;
import android.hardware.Camera;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.os.Environment;
import android.util.Log;
import android.view.Menu;
import android.view.MenuItem;
import android.view.MotionEvent;
import android.view.SubMenu;
import android.view.SurfaceView;
import android.view.View;
import android.view.WindowManager;
import android.widget.Toast;

import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Frame;
import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;

import java.io.File;
import java.nio.ShortBuffer;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;
import java.util.ListIterator;

@SuppressWarnings("ALL")
public class OpenCVCameraActivity extends Activity implements CameraBridgeViewBase.CvCameraViewListener2, View.OnTouchListener {

    //name of activity, for DEBUGGING
    private static final String TAG = OpenCVCameraActivity.class.getSimpleName();

    private OpenCVCameraPreview mOpenCvCameraView;
    private List mResolutionList;
    private MenuItem[] mEffectMenuItems;
    private SubMenu mColorEffectsMenu;
    private MenuItem[] mResolutionMenuItems;
    private SubMenu mResolutionMenu;

    private static long frameCounter = 0;
    long startTime = 0;
    private Mat edgesMat;
    boolean recording = false;

    private int sampleAudioRateInHz = 44100;
    private int imageWidth = 1920;
    private int imageHeight = 1080;
    private int frameRate = 30;
    private Frame yuvImage = null;
    private File ffmpeg_link;
    private FFmpegFrameRecorder recorder;

    /* audio data getting thread */
    private AudioRecord audioRecord;
    private AudioRecordRunnable audioRecordRunnable;
    private Thread audioThread;
    volatile boolean runAudioThread = true;
    ShortBuffer[] samples;

    private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS:
                    Log.i(TAG, "OpenCV loaded successfully");
                    mOpenCvCameraView.enableView();
                    mOpenCvCameraView.setOnTouchListener(OpenCVCameraActivity.this);
                    break;
                default:
                    super.onManagerConnected(status);
                    break;
            }
        }
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if(Static.DEBUG) Log.i(TAG, "onCreate()");
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        try {
            setContentView(R.layout.activity_opencv);
            mOpenCvCameraView = (OpenCVCameraPreview) findViewById(R.id.openCVCameraPreview);
            mOpenCvCameraView.setVisibility(SurfaceView.VISIBLE);
            mOpenCvCameraView.setCvCameraViewListener(this);
            //mOpenCvCameraView.enableFpsMeter();
            ffmpeg_link = new File(Environment.getExternalStorageDirectory(), "stream.mp4");
        } catch (Exception e){
            e.printStackTrace();
        }
    }

    @Override
    protected void onRestart() {
        if (Static.DEBUG) Log.i(TAG, "onRestart()");
        super.onRestart();
    }

    @Override
    protected void onStart() {
        if (Static.DEBUG) Log.i(TAG, "onStart()");
        super.onStart();
    }

    @Override
    protected void onResume() {
        if (Static.DEBUG) Log.i(TAG, "onResume()");
        super.onResume();
        if (!OpenCVLoader.initDebug()) {
            Log.i(TAG, "Internal OpenCV library not found. Using OpenCV Manager for initialization");
            OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_11, this, mLoaderCallback);
        } else {
            Log.i(TAG, "OpenCV library found inside package. Using it!");
            mLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
        }
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        if (Static.DEBUG) Log.i(TAG, "onCreateOptionsMenu()");
        super.onCreateOptionsMenu(menu);
        List effects = mOpenCvCameraView.getEffectList();
        if (effects == null) {
            Log.e(TAG, "Color effects are not supported by device!");
            return true;
        }
        mColorEffectsMenu = menu.addSubMenu("Color Effect");
        mEffectMenuItems = new MenuItem[effects.size()];
        int idx = 0;
        ListIterator effectItr = effects.listIterator();
        while(effectItr.hasNext()) {
            String element = effectItr.next();
            mEffectMenuItems[idx] = mColorEffectsMenu.add(1, idx, Menu.NONE, element);
            idx++;
        }
        mResolutionMenu = menu.addSubMenu("Resolution");
        mResolutionList = mOpenCvCameraView.getResolutionList();
        mResolutionMenuItems = new MenuItem[mResolutionList.size()];
        ListIterator resolutionItr = mResolutionList.listIterator();
        idx = 0;
        while(resolutionItr.hasNext()) {
            Camera.Size element = resolutionItr.next();
            mResolutionMenuItems[idx] = mResolutionMenu.add(2, idx, Menu.NONE,
                    Integer.valueOf(element.width).toString() + "x" + Integer.valueOf(element.height).toString());
            idx++;
        }
        return true;
    }

    @Override
    protected void onPause() {
        if (Static.DEBUG) Log.i(TAG, "onPause()");
        super.onPause();
        if (mOpenCvCameraView != null)
            mOpenCvCameraView.disableView();
    }

    @Override
    protected void onStop() {
        if (Static.DEBUG) Log.i(TAG, "onStop()");
        super.onStop();
    }

    @Override
    protected void onDestroy() {
        if (Static.DEBUG) Log.i(TAG, "onDestroy()");
        super.onDestroy();
        if (mOpenCvCameraView != null)
            mOpenCvCameraView.disableView();
    }

    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        ++frameCounter;
        //Log.i(TAG, "Frame number: "+frameCounter);
        return inputFrame.rgba();
    }

    @Override
    public void onCameraViewStarted(int width, int height) {
        edgesMat = new Mat();
    }

    @Override
    public void onCameraViewStopped() {
        if (edgesMat != null)
            edgesMat.release();
        edgesMat = null;
    }

    public boolean onOptionsItemSelected(MenuItem item) {
        Log.i(TAG, "called onOptionsItemSelected; selected item: " + item);
        if (item.getGroupId() == 1) {
            mOpenCvCameraView.setEffect((String) item.getTitle());
            Toast.makeText(this, mOpenCvCameraView.getEffect(), Toast.LENGTH_SHORT).show();
        } else if (item.getGroupId() == 2) {
            int id = item.getItemId();
            Camera.Size resolution = mResolutionList.get(id);
            mOpenCvCameraView.setResolution(resolution);
            resolution = mOpenCvCameraView.getResolution();
            String caption = Integer.valueOf(resolution.width).toString() + "x" + Integer.valueOf(resolution.height).toString();
            Toast.makeText(this, caption, Toast.LENGTH_SHORT).show();
        }
        return true;
    }

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        Log.i(TAG,"onTouch event");
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd_HH-mm-ss");
        String currentDateandTime = sdf.format(new Date());
        String fileName = Environment.getExternalStorageDirectory().getPath() + "/sample_picture_" + currentDateandTime + ".jpg";
        mOpenCvCameraView.takePicture(fileName);
        Toast.makeText(this, fileName + " saved", Toast.LENGTH_SHORT).show();
        return false;
    }

    /**
     * Click to ImageButton to start recording.
     */
    public void onClickBtnStartRecord2(View v) {
        if (Static.DEBUG) Log.i(TAG, "onClickBtnStartRecord()");
        if(!recording)
            startRecording();
        else
            stopRecording();
    }

    private void startRecording() {
        if (Static.DEBUG) Log.i(TAG, "startRecording()");
        initRecorder();
        try {
            recorder.start();
            startTime = System.currentTimeMillis();
            recording = true;
            audioThread.start();
        } catch(FFmpegFrameRecorder.Exception e) {
            e.printStackTrace();
        }
    }

    private void stopRecording() {
        if (Static.DEBUG) Log.i(TAG, "stopRecording()");
        runAudioThread = false;
        try {
            audioThread.join();
        } catch(InterruptedException e) {
            e.printStackTrace();
        }
        audioRecordRunnable = null;
        audioThread = null;
        if(recorder != null && recording) {
            recording = false;
            Log.v(TAG, "Finishing recording, calling stop and release on recorder");
            try {
                recorder.stop();
                recorder.release();
            } catch(FFmpegFrameRecorder.Exception e) {
                e.printStackTrace();
            }
            recorder = null;
        }
    }

    //---------------------------------------
    // initialize ffmpeg_recorder
    //---------------------------------------
    private void initRecorder() {
        Log.w(TAG, "init recorder");
        try {
            if (yuvImage == null) {
                yuvImage = new Frame(imageWidth, imageHeight, Frame.DEPTH_UBYTE, 2);
                Log.i(TAG, "create yuvImage");
            }
            Log.i(TAG, "ffmpeg_url: " + ffmpeg_link.getAbsolutePath());
            Log.i(TAG, "ffmpeg_url: " + ffmpeg_link.exists());
            recorder = new FFmpegFrameRecorder(ffmpeg_link, imageWidth, imageHeight, 1);
            recorder.setFormat("mp4");
            recorder.setSampleRate(sampleAudioRateInHz);
            // Set in the surface changed method
            recorder.setFrameRate(frameRate);
            Log.i(TAG, "recorder initialize success");
            audioRecordRunnable = new AudioRecordRunnable();
            audioThread = new Thread(audioRecordRunnable);
            runAudioThread = true;
        } catch (Exception e){
            e.printStackTrace();
        }
    }

    //---------------------------------------------
    // audio thread, gets and encodes audio data
    //---------------------------------------------
    class AudioRecordRunnable implements Runnable {
        @Override
        public void run() {
            android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
            // Audio
            int bufferSize;
            ShortBuffer audioData;
            int bufferReadResult;
            bufferSize = AudioRecord.getMinBufferSize(sampleAudioRateInHz, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleAudioRateInHz, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
            audioData = ShortBuffer.allocate(bufferSize);
            Log.d(TAG, "audioRecord.startRecording()");
            audioRecord.startRecording();
            /* ffmpeg_audio encoding loop */
            while(runAudioThread) {
                //Log.v(TAG,"recording? " + recording);
                bufferReadResult = audioRecord.read(audioData.array(), 0, audioData.capacity());
                audioData.limit(bufferReadResult);
                if(bufferReadResult > 0) {
                    Log.v(TAG, "bufferReadResult: " + bufferReadResult);
                    // If "recording" isn't true when start this thread, it never get's set according to this if statement...!!!
                    // Why? Good question...
                    if(recording) {
                        try {
                            recorder.recordSamples(audioData);
                            //Log.v(TAG,"recording " + 1024*i + " to " + 1024*i+1024);
                        } catch(FFmpegFrameRecorder.Exception e) {
                            Log.v(TAG, e.getMessage());
                            e.printStackTrace();
                        }
                    }
                }
            }
            Log.v(TAG, "AudioThread Finished, release audioRecord");
            /* encoding finish, release recorder */
            if(audioRecord != null) {
                audioRecord.stop();
                audioRecord.release();
                audioRecord = null;
                Log.v(TAG, "audioRecord released");
            }
        }
    }
}

OpenCVCameraPreview.java:

import android.content.Context;
import android.hardware.Camera;
import android.util.AttributeSet;
import android.util.Log;

import org.opencv.android.JavaCameraView;

import java.io.FileOutputStream;
import java.util.List;

public class OpenCVCameraPreview extends JavaCameraView implements Camera.PictureCallback {

    private static final String TAG = OpenCVCameraPreview.class.getSimpleName();
    private String mPictureFileName;

    public OpenCVCameraPreview(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public List getEffectList() {
        return mCamera.getParameters().getSupportedColorEffects();
    }

    public boolean isEffectSupported() {
        return (mCamera.getParameters().getColorEffect() != null);
    }

    public String getEffect() {
        return mCamera.getParameters().getColorEffect();
    }

    public void setEffect(String effect) {
        Camera.Parameters params = mCamera.getParameters();
        params.setColorEffect(effect);
        mCamera.setParameters(params);
    }

    public List getResolutionList() {
        return mCamera.getParameters().getSupportedPreviewSizes();
    }

    public void setResolution(Camera.Size resolution) {
        disconnectCamera();
        mMaxHeight = resolution.height;
        mMaxWidth = resolution.width;
        connectCamera(getWidth(), getHeight());
    }

    public Camera.Size getResolution() {
        return mCamera.getParameters().getPreviewSize();
    }

    public void takePicture(final String fileName) {
        Log.i(TAG, "Taking picture");
        this.mPictureFileName = fileName;
        // Postview and jpeg are sent in the same buffers if the queue is not empty when performing a capture.
        // Clear up buffers to avoid mCamera.takePicture to be stuck because of a memory issue
        mCamera.setPreviewCallback(null);
        // PictureCallback is implemented by the current class
        mCamera.takePicture(null, null, this);
    }

    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        Log.i(TAG, "Saving a bitmap to file");
        // The camera preview was automatically stopped. Start it again.
        mCamera.startPreview();
        mCamera.setPreviewCallback(this);
        // Write the image in a file (in jpeg format)
        try {
            FileOutputStream fos = new FileOutputStream(mPictureFileName);
            fos.write(data);
            fos.close();
        } catch (java.io.IOException e) {
            Log.e("PictureDemo", "Exception in photoCallback", e);
        }
    }
}

Gradle:

apply plugin: 'com.android.application'

android {
    compileSdkVersion 23
    buildToolsVersion "23.0.2"

    defaultConfig {
        applicationId "co.example.example"
        minSdkVersion 15
        targetSdkVersion 23
        versionCode 1
        versionName "1.0"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
    packagingOptions {
        exclude 'META-INF/maven/org.bytedeco.javacpp-presets/opencv/pom.properties'
        exclude 'META-INF/maven/org.bytedeco.javacpp-presets/opencv/pom.xml'
        exclude 'META-INF/maven/org.bytedeco.javacpp-presets/ffmpeg/pom.properties'
        exclude 'META-INF/maven/org.bytedeco.javacpp-presets/ffmpeg/pom.xml'
    }
}

repositories {
    mavenCentral()
}

dependencies {
    compile fileTree(include: ['*.jar'], dir: 'libs')
    testCompile 'junit:junit:4.12'
    compile 'com.android.support:appcompat-v7:23.1.1'
    compile 'com.google.android.gms:play-services-appindexing:8.1.0'
    compile group: 'org.bytedeco', name: 'javacv', version: '1.1'
    compile group: 'org.bytedeco.javacpp-presets', name: 'opencv', version: '3.0.0-1.1', classifier: 'android-arm'
    compile group: 'org.bytedeco.javacpp-presets', name: 'opencv', version: '3.0.0-1.1', classifier: 'android-x86'
    compile group: 'org.bytedeco.javacpp-presets', name: 'ffmpeg', version: '2.8.1-1.1', classifier: 'android-arm'
    compile group: 'org.bytedeco.javacpp-presets', name: 'ffmpeg', version: '2.8.1-1.1', classifier: 'android-x86'
    compile project(':openCVLibrary310')
}

proguard-rules.pro Edited by: link

jniLibs: app/src/main/jniLibs:

armeabi armeabi-v7a arm64-v8a mips mips64 x86 x86_64

Problem

02-19 11:57:37.684 1759-1759/ I/OpenCVCameraActivity: onClickBtnStartRecord()
02-19 11:57:37.684 1759-1759/ I/OpenCVCameraActivity: startRecording()
02-19 11:57:37.684 1759-1759/ W/OpenCVCameraActivity: init recorder
02-19 11:57:37.691 1759-1759/ I/OpenCVCameraActivity: create yuvImage
02-19 11:57:37.691 1759-1759/ I/OpenCVCameraActivity: ffmpeg_url: /storage/emulated/0/stream.mp4
02-19 11:57:37.696 1759-1759/ I/OpenCVCameraActivity: ffmpeg_url: false
02-19 11:57:37.837 1759-1759/ W/linker: libjniavutil.so: unused DT entry: type 0x1d arg 0x18cc3
02-19 11:57:37.837 1759-1759/ W/linker: libjniavutil.so: unused DT entry: type 0x6ffffffe arg 0x21c30
02-19 11:57:37.837 1759-1759/ W/linker: libjniavutil.so: unused DT entry: type 0x6fffffff arg 0x1
02-19 11:57:37.838 1759-1759/co.example.example E/art: dlopen("/data/app/co.example.example-2/lib/x86/libjniavutil.so", RTLD_LAZY) failed: dlopen failed: cannot locate symbol "av_version_info" referenced by "libjniavutil.so"...
02-19 11:57:37.843 1759-1759/co.example.example I/art: Rejecting re-init on previously-failed class java.lang.Class
02-19 11:57:37.844 1759-1759/co.example.example E/AndroidRuntime: FATAL EXCEPTION: main
    Process: co.example.example, PID: 1759
    java.lang.IllegalStateException: Could not execute method of the activity
        at android.view.View$1.onClick(View.java:4020)
        at android.view.View.performClick(View.java:4780)
        at android.view.View$PerformClick.run(View.java:19866)
        at android.os.Handler.handleCallback(Handler.java:739)
        at android.os.Handler.dispatchMessage(Handler.java:95)
        at android.os.Looper.loop(Looper.java:135)
        at android.app.ActivityThread.main(ActivityThread.java:5254)
        at java.lang.reflect.Method.invoke(Native Method)
        at java.lang.reflect.Method.invoke(Method.java:372)
        at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698)
     Caused by: java.lang.reflect.InvocationTargetException
        at java.lang.reflect.Method.invoke(Native Method)
        at java.lang.reflect.Method.invoke(Method.java:372)
        at android.view.View$1.onClick(View.java:4015)
        at android.view.View.performClick(View.java:4780)
        at android.view.View$PerformClick.run(View.java:19866)
        at android.os.Handler.handleCallback(Handler.java:739)
        at android.os.Handler.dispatchMessage(Handler.java:95)
        at android.os.Looper.loop(Looper.java:135)
        at android.app.ActivityThread.main(ActivityThread.java:5254)
        at java.lang.reflect.Method.invoke(Native Method)
        at java.lang.reflect.Method.invoke(Method.java:372)
        at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698)
     Caused by: java.lang.UnsatisfiedLinkError: org.bytedeco.javacpp.avutil
        at java.lang.Class.classForName(Native Method)
        at java.lang.Class.forName(Class.java:309)
        at org.bytedeco.javacpp.Loader.load(Loader.java:413)
        at org.bytedeco.javacpp.Loader.load(Loader.java:381)
        at org.bytedeco.javacpp.avcodec$AVPacket.<init>(avcodec.java:1650)
        at org.bytedeco.javacv.FFmpegFrameRecorder.<init>(FFmpegFrameRecorder.java:149)
        at org.bytedeco.javacv.FFmpegFrameRecorder.<init>(FFmpegFrameRecorder.java:129)
        at co.example.example.OpenCVCameraActivity.initRecorder(OpenCVCameraActivity.java:320)
        at co.example.example.OpenCVCameraActivity.startRecording(OpenCVCameraActivity.java:266)
        at co.example.example.OpenCVCameraActivity.onClickBtnStartRecord2(OpenCVCameraActivity.java:259)
        at java.lang.reflect.Method.invoke(Native Method)
        at java.lang.reflect.Method.invoke(Method.java:372)
        at android.view.View$1.onClick(View.java:4015)
        at android.view.View.performClick(View.java:4780)
        at android.view.View$PerformClick.run(View.java:19866)
        at android.os.Handler.handleCallback(Handler.java:739)
        at android.os.Handler.dispatchMessage(Handler.java:95)
        at android.os.Looper.loop(Looper.java:135)
        at android.app.ActivityThread.main(ActivityThread.java:5254)
        at java.lang.reflect.Method.invoke(Native Method)
        at java.lang.reflect.Method.invoke(Method.java:372)
        at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698)
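
The line about av_version_info is the most telling one: libjniavutil.so (from the javacpp presets) references a symbol that the libavutil.so actually loaded at runtime does not export, i.e. the JNI wrappers and the FFmpeg libraries in the APK come from mismatched versions, or the loader is picking up a different libavutil than the one shipped in jniLibs. A quick way to check on the build machine, sketched against the jniLibs layout listed above (paths are illustrative):

# which shared libraries does the JNI wrapper expect?
readelf -d app/src/main/jniLibs/x86/libjniavutil.so | grep NEEDED

# does the bundled libavutil actually export the missing symbol?
readelf --dyn-syms app/src/main/jniLibs/x86/libavutil.so | grep av_version_info

If the symbol is absent, aligning the javacv and javacpp-presets versions (and refreshing the jniLibs folders for every ABI you ship, not just ARM) is the usual fix.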

Ffmpeg streaming from capture device Osprey 450e fails


I want to encode a live video stream with ffmpeg, capturing from a DirectShow card (Osprey Card 450e), and send it out as a multicast stream. For the moment I get this error:

ffmpeg -f dshow -i video="Osprey-450e Video Device 1A":audio="Osprey-450e Audio Device 1A" -f mpegts -b:v 5120k -r 30 -c:v mpeg2video -c:a ac3 -b:a 256k udp://239.192.42.61:1234

[dshow @ 02c7f640] Could not run filter video=Osprey-450e Video Device 1A:Audio?Osprey-450e Audio Device 1A: Input/output error

Can ffmpeg encode a DirectShow input?
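
Yes, ffmpeg can encode from DirectShow inputs, so the first step is making sure the device names and formats match what the card actually announces. Two standard diagnostics, as documented for the dshow input:

ffmpeg -list_devices true -f dshow -i dummy
ffmpeg -f dshow -list_options true -i video="Osprey-450e Video Device 1A"

It is also worth noting that the error echoes the input back as "Audio?Osprey-450e...", which suggests the audio="..." part is being mangled by shell quoting before it ever reaches the card; quoting the whole video=...:audio=... argument as one string is a reasonable thing to try.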

Compile ffmpeg with x265 and fdk-aac on Minibian - dependencies not found


I am trying to get a live streaming device to work on a Raspberry Pi. I am running Minibian. I roughly follow this guide, without the cross-compiling.

My problem is probably with the compilation of ffmpeg. I downloaded and compiled both x265 and fdk-aac. Next I have to compile ffmpeg, which is in the same folder as the others, but the compiler can't find any of the dependencies. x265 also cannot be found using pkg-config, which is the error ./configure produces when I run it.

I directly cloned everything into one folder, so that in a folder called ffmpeg_files there are three other folders: ffmpeg, fdk-aac and x265. How do I properly include these dependencies so I can enable them when I compile ffmpeg?

Thank you!
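
For what it's worth, ffmpeg's configure locates x265 and fdk-aac through pkg-config and the compiler search paths, not by looking at sibling source directories. A sketch assuming both libraries were installed with make install to a prefix such as $HOME/ffmpeg_build (adjust to whatever prefix you actually used):

export PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig:$PKG_CONFIG_PATH"
cd ffmpeg
./configure --prefix="$HOME/ffmpeg_build" \
  --extra-cflags="-I$HOME/ffmpeg_build/include" \
  --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
  --enable-gpl --enable-libx265 --enable-nonfree --enable-libfdk-aac
make -j4 && make install

If x265 and fdk-aac were only built but never installed, running make install in each of their trees first is what creates the .pc files that pkg-config (and therefore ./configure) looks for.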


PHP ssh2 and ffmpeg return PID


I'm trying to execute an ffmpeg command on a remote server from PHP; the ffmpeg command will start streaming a video. This is the command I use on the server, and it works fine:

go-play ~/demo.mp4 rtmp://rtmp.mysite.com/XXX

Where go-play is a shell function defined in a file as:

#!/bin/bash
function go-play(){
    ffmpeg -re -i $1 -c:v libx264 -framerate 30 -preset veryfast -profile:v high -level 4.1 -crf 18 -pix_fmt yuv420p -g 60 -keyint_min 60 -sc_threshold 0 -c:a aac -b:a 128k -ac 2 -ar 44100 -f flv "$2"
}

The file can be minutes long, so I have to get the PID of this process right after starting it.

There can be more processes running like this, started from different sources, so I'll have to work out a way to grep the PID correctly.

My current idea was to use ssh -f so that the ssh command execution goes into the background and I can return the PID, and this is the command I'm trying:

ssh -f -p 20 localhost go-play ~/demo.mp4 rtmp://rtmp.mysite.com/XXX & ps aux | grep "[s]sh.*-f"

And this is the PHP ssh2 part:

$config = [];
$config['IP'] = "1.1.1.1";
$config['port'] = 21;
$config['user'] = 'root';
$config['password'] = 'mypass';

$connection = ssh2_connect($config['IP'], $config['port']);
ssh2_auth_password($connection, $config['user'], $config['password']);

$cmd = 'ssh -f -p 20 localhost go-play ~/demo.mp4 rtmp://rtmp.mysite.com/XXX & ps aux | grep "[s]sh.*-f"';

$stdout = ssh2_exec($connection, $cmd);
$stderr = ssh2_fetch_stream($stdout, SSH2_STREAM_STDERR);

if (!empty($stdout)) {
    $t0 = time();
    $err_buf = null;
    $out_buf = null;
    // Try for 30s
    do {
        $err_buf .= fread($stderr, 4096);
        $out_buf .= fread($stdout, 4096);
        $done = 0;
        if (feof($stderr)) { $done++; }
        if (feof($stdout)) { $done++; }
        $t1 = time();
        $span = $t1 - $t0;
        // Wait here so we don't hammer in this loop
        sleep(1);
    } while (($span < 30) && ($done < 2));
    if ($err_buf) { echo $err_buf; }
    if ($out_buf) { echo $out_buf; }
} else {
    echo "Failed to Shell\n";
}
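
A simpler way to get the PID back, sketched on the assumption that the remote shell is bash and that the file defining go-play can be sourced non-interactively (go-play.sh below is a stand-in for wherever the function actually lives): let the remote side background the job itself and echo $!, which is exactly the PID of the process just launched, so no ps/grep matching is needed:

# remote command: background the stream and print its PID
source ~/go-play.sh; go-play ~/demo.mp4 rtmp://rtmp.mysite.com/XXX > /dev/null 2>&1 & echo $!

Passed as the command string to ssh2_exec, the only thing read from stdout is the PID. One caveat: $! is the PID of the backgrounded invocation; if go-play is changed to exec ffmpeg ... inside the function, that PID is ffmpeg's own, which makes a later kill unambiguous.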

Run linphone on android simulator - Unable to load optional library ffmpeg-linphone


I successfully ran prepare and make and got a build, but I still haven't managed to deploy it to a device or simulator.

Running on target 21.

Error message :

W/FactoryImpl: Unable to load optional library ffmpeg-linphone: dalvik.system.PathClassLoader[DexPathList[[zip file "/data/app/org.linphone-1/base.apk", zip file "/data/app/org.linphone-1/split_lib_dependencies_apk.apk", zip file "/data/app/org.linphone-1/split_lib_slice_0_apk.apk", zip file "/data/app/org.linphone-1/split_lib_slice_1_apk.apk", zip file "/data/app/org.linphone-1/split_lib_slice_2_apk.apk", zip file "/data/app/org.linphone-1/split_lib_slice_3_apk.apk", zip file "/data/app/org.linphone-1/split_lib_slice_4_apk.apk", zip file "/data/app/org.linphone-1/split_lib_slice_5_apk.apk", zip file "/data/app/org.linphone-1/split_lib_slice_6_apk.apk", zip file "/data/app/org.linphone-1/split_lib_slice_7_apk.apk", zip file "/data/app/org.linphone-1/split_lib_slice_8_apk.apk", zip file "/data/app/org.linphone-1/split_lib_slice_9_apk.apk"],nativeLibraryDirectories=[/data/app/org.linphone-1/lib/arm, /vendor/lib, /system/lib]]] couldn't find "libffmpeg-linphone.so" A/libc: Fatal signal 4 (SIGILL), code 1, fault addr 0xa44d5aba in tid 4306 (org.linphone)
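
The log itself narrows this down: the loader searched /data/app/org.linphone-1/lib/arm plus the system paths and could not find libffmpeg-linphone.so, so either the library was not packaged at all or it was not built for the ABI the target runs. A quick way to see which native libraries the APK actually carries, assuming the base.apk named in the log:

unzip -l base.apk | grep '\.so$'

If libffmpeg-linphone.so is missing, the optional FFmpeg plugin simply was not built or packaged for this ABI; the SIGILL that follows can also point at native code built for a different CPU variant than the one being emulated, so matching the build's ABIs to the emulator image is the first thing to try.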

Revision 111092: Handle the @checkbox@ == or @checkbox@ IN conditions at the level of ...

Handle the @checkbox@ == and @checkbox@ IN conditions at display time too, not only at validation, by extending the regexp. Thanks to Anne-Marie for the report :) -- Log