lavc/psd: Remove an uninitialized variable.
Anomalie #3885 (New): add a tenter_unserialize filter
In formidable, we have a filter:
/**
 * Try to unserialize a text
 *
 * If the parameter is an array, return the array;
 * if it is a string, try to unserialize it, otherwise
 * return the string.
 *
 * @filtre
 *
 * @param string|array $texte
 *     The (possibly serialized) text, or an array
 * @return array|string
 *     Array, unserialized text, or the text itself
 **/
function filtre_tenter_unserialize_dist($texte) {
    if (is_array($texte)) {
        return $texte;
    }
    if ($tmp = @unserialize($texte)) {
        return $tmp;
    }
    return $texte;
}
It could be nice to have this natively, no? Let me know if it gets integrated, to avoid duplicate errors.
Converting videos in Unity at runtime -> MonoPosixHelper missing
I want to do something Unity doesn't seem able to do: convert videos at runtime to use them as MovieTextures in my resulting program, since Unity can only work with videos in .ogg format. For that I use the NReco lib, which in itself seems to do what I want. Sadly, it also uses compression which, for some reason, is not supported by Unity, I guess?
Here is my code:
using (FileStream file = new FileStream(filename, FileMode.Create, FileAccess.Write))
{
    var output = new MemoryStream();
    var ffMpeg = new FFMpegConverter();
    ffMpeg.ConvertMedia(filePath, output, Format.ogg);
    output.Seek(0, SeekOrigin.Begin);
    byte[] arr = new byte[output.Length];
    output.Read(arr, 0, (int)output.Length);
    file.Write(arr, 0, arr.Length);
    output.Close();
}
But then, ffMpeg.ConvertMedia results in the following exception:
NReco.VideoConverter.FFMpegConverter.EnsureFFMpegLibs ()
NReco.VideoConverter.FFMpegConverter.ConvertMedia (NReco.VideoConverter.Media input, NReco.VideoConverter.Media output, NReco.VideoConverter.ConvertSettings settings)
NReco.VideoConverter.FFMpegConverter.ConvertMedia (System.String inputFile, System.String inputFormat, System.IO.Stream outputStream, System.String outputFormat, NReco.VideoConverter.ConvertSettings settings)
NReco.VideoConverter.FFMpegConverter.ConvertMedia (System.String inputFile, System.IO.Stream outputStream, System.String outputFormat)
As I read, it is intended by Unity not to make MonoPosixHelper available by default. So, is there anything I can do? Any workaround? Any magical spell? Or am I missing something elementary here?
Thanks in advance.
java.lang.UnsatisfiedLinkError: JNI_ERR returned from JNI_OnLoad in "lib/arm/liblsdisplay.so"
I have created a video-reverse application using the lansoSDK, and I load the libraries from the jniLibs folder in a class; then the error "java.lang.UnsatisfiedLinkError: JNI_ERR returned from JNI_OnLoad" occurs. Please, can anyone help me solve this problem?
12-30 17:50:34.797 20641-20641/com.aspiration.gifmaker E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.aspiration.gifmaker, PID: 20641
java.lang.UnsatisfiedLinkError: JNI_ERR returned from JNI_OnLoad in "/data/app/com.aspiration.gifmaker-2/lib/arm/liblsdisplay.so"
    at java.lang.Runtime.loadLibrary(Runtime.java:372)
    at java.lang.System.loadLibrary(System.java:1076)
    at com.aspiration.gifmaker.videoeditor.LoadLanSongSdk.loadLibraries(LoadLanSongSdk.java:13)
    at com.aspiration.gifmaker.commonDemo.DemoActivity.onCreate(DemoActivity.java:41)
    at android.app.Activity.performCreate(Activity.java:6303)
    at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1108)
    at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2376)
    at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2483)
    at android.app.ActivityThread.access$900(ActivityThread.java:153)
    at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1349)
    at android.os.Handler.dispatchMessage(Handler.java:102)
    at android.os.Looper.loop(Looper.java:148)
    at android.app.ActivityThread.main(ActivityThread.java:5441)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:738)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:628)
My libs are loaded as:
System.loadLibrary("ffmpegeditor");
System.loadLibrary("lsdisplay");
System.loadLibrary("lsplayer");
ffmpeg dump specific length of video after seeking PICT_TYPE_I keyframe
I have an FLV video and want to dump, let's say, 3 s of the video after the first PICT_TYPE_I keyframe met after 00:39. I read the ffmpeg documentation on seeking, which I quote here:
ffmpeg -ss 00:23:00 -i Mononoke.Hime.mkv -frames:v 1 out1.jpg
This example will produce one image frame (out1.jpg) at the twenty-third minute from the beginning of the movie. The input will be parsed using keyframes, which is very fast. As of FFmpeg 2.1, when transcoding with ffmpeg (i.e. not just stream copying), -ss is now also "frame-accurate" even when used as an input option. Previous behavior (seeking only to the nearest preceding keyframe, even if not precisely accurate) can be restored with the -noaccurate_seek option.
So I think if I use this command (putting -ss before -i):
ffmpeg -noaccurate_seek -ss 00:39 -i input.flv -r 10 -s 720x400 -t 3.12 dump.flv
This should dump a video that lasts 3.12 s and begins with the first keyframe after 00:39, right? After all, this is what I need.
But the resulting dump.flv does not start with a keyframe, i.e. a PICT_TYPE_I frame.
I know I could find all keyframe start times with ffprobe and recalculate the -ss seek time to achieve this. But is there a better way?
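For what it's worth, the ffprobe route mentioned above can be scripted. Below is a minimal sketch of the recalculation step, assuming the I-frame timestamps have already been extracted from ffprobe output; the helper name and the exact ffprobe invocation in the comment are illustrative, not a definitive recipe.

```python
def first_keyframe_at_or_after(keyframe_times, target):
    """Return the first keyframe timestamp (seconds) at or after `target`,
    to be used as a recalculated -ss value; None if there is none."""
    for t in sorted(keyframe_times):
        if t >= target:
            return t
    return None

# The timestamps are assumed to come from something like:
#   ffprobe -select_streams v -show_frames \
#           -show_entries frame=pict_type,pkt_pts_time input.flv
# keeping only frames whose pict_type is I.
print(first_keyframe_at_or_after([0.0, 12.5, 40.2, 55.0], 39.0))  # 40.2
```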
configure: Check build with some header not just preprocessing for testing --std=c11
configure: Fix typo in incdir variable written to config.sh
Use a MediaStreamSource as the Source for a WPF MediaElement
I am trying to use the FFmpegInterop library in my WPF application. The problem I am having is that I can't find a way to use a MediaStreamSource as the source for a MediaElement. Any ideas would be greatly appreciated.
Thanks,
Dom
How to convert mp4 to h264 adding AUDs using FFMPEG
I am trying to convert an mp4 clip to h264 bytestream format using FFMPEG. I have successfully compiled FFMPEG from source with access to libx264.
Looking at the documentation for FFMPEG (version 3.2.2), under libx264 there is a bool flag -aud. The online documentation gives the example:
ffmpeg -i input.flac -id3v2_version 3 out.mp3
Using this format, the following command works, but doesn't produce the desired AUDs in the output file:
ffmpeg -i input.mp4 -codec:v libx264 -aud 1 output.h264
I've also tried different variants with this including:
ffmpeg -i input.mp4 -vcodec libx264 -aud 1 output.h264
ffmpeg -i input.mp4 -aud 1 output.h264
etc.
I assume there is something I'm misunderstanding about this operation. I basically want to take an h264 movie in an mp4 container and dump it as an h264 stream with AUDs added to it. Any idea why this isn't working?
(I've also tried using x264 with the -aud flag; it also ran fine but didn't produce the desired output.)
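One way to check whether AUDs actually made it into output.h264 is to scan the bytestream for them. A rough sketch follows, assuming an Annex B stream; the helper is hypothetical, but NAL type 9 is the access unit delimiter per the H.264 spec.

```python
def has_aud_nals(stream: bytes) -> bool:
    """Scan an Annex B H.264 bytestream for access unit delimiter NALs.

    NAL units follow 00 00 01 (or 00 00 00 01) start codes; the NAL unit
    type is the low 5 bits of the byte after the start code, and type 9
    is the access unit delimiter (AUD).
    """
    i = 0
    while True:
        i = stream.find(b"\x00\x00\x01", i)
        if i == -1 or i + 3 >= len(stream):
            return False
        if stream[i + 3] & 0x1F == 9:
            return True
        i += 3
```

For example, a stream beginning `00 00 00 01 09 ...` contains an AUD, while one that only carries SPS/PPS/slice NALs does not.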
Android encode video with ffmpeg while it is still recording
I want to develop an Android application which allows me to continuously record a video and upload parts of it to a server without stopping the recording. It is crucial for the application that I can record up to 60 min without stopping the video.
Initial approach
The application consists of two parts:
- MediaRecorder which records a video continuously from the camera.
- Cutter/Copy - Part: While the video is recorded I have to take out certain segments and send them to a server.
This part was implemented using libffmpeg.so from http://ffmpeg4android.netcompss.com/. I used their VideoKit Wrapper, which allows me to directly run ffmpeg with any params I need.
My Problem
I tried the ffmpeg command with the params
ffmpeg -ss 00:00:03 -i -t 00:00:05 -vcodec copy -acodec copy
which worked great for me as long as Android's MediaRecorder had finished recording.
When I execute the same command, while the MediaRecorder is recording the file, ffmpeg exits with the error message "Operation not permitted".
- I think that the error message doesn't mean that Android prevents access to the file. I think that ffmpeg needs the "moov atoms" to find the proper positions in the video.
For that reason I thought of other approaches (which don't need the moov-atom):
- Create an rtsp stream with Android and access the rtsp stream later. The problem is that, to my knowledge, the Android SDK doesn't support recording to an rtsp stream.
- Maybe it is possible to access the camera directly with ffmpeg (/dev/video0 seems to be a video device?!)
- I read about webm as an alternative for streaming; maybe Android can record webm streams?!
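Regarding the moov-atom theory above: whether the recorder has written a moov box yet can be checked by walking the file's top-level MP4 boxes. This is a minimal sketch; it ignores 64-bit box sizes and other edge cases.

```python
import struct

def top_level_boxes(data: bytes):
    """List the top-level box types of an MP4/MOV file.

    Each top-level box starts with a 32-bit big-endian size followed by
    a 4-byte type; extended (64-bit) sizes and truncated tails are not
    handled in this sketch.
    """
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        boxes.append(box_type.decode("ascii", "replace"))
        if size < 8:
            break  # malformed size; stop rather than loop forever
        offset += size
    return boxes

# A file that is still being recorded will typically lack "moov":
# with open("recording.mp4", "rb") as f:
#     print("moov" in top_level_boxes(f.read()))
```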
TL;DR: I want to access a video file with ffmpeg (libffmpeg.so) while it is still being recorded. ffmpeg exits with the error message "Operation not permitted".
Goal:
My goal is to record a video (and audio) and take parts of the video while it is still recording and upload them to the server.
Maybe you can help me solve the problem, or you have other ideas on how to approach it.
Thanks a lot in advance.
"You must have Ffmpeg installed in order to use this function" even with ffmpeg installed?
I am trying to convert YouTube videos to mp3 with this method, https://github.com/eyecatchup/php-yt_downloader, but when I run it on localhost I get this error.
I have installed it, and I am very sure I have set up the path properly. What am I doing wrong?
Including ffmpeg library in AWS Lambda
I'm trying to include the ffmpeg library with AWS Lambda. My deployment package looks like this:
drwxrwxrwx 2 root root        0 Dec 22 13:04 bin
-rwxrwxrwx 1 root root 40166912 Dec 22 11:50 ffmpeg.exe
-rwxrwxrwx 1 root root       30 Dec 22 13:04 version.sh
drwxrwxrwx 2 root root        0 Dec 22 16:35 node_modules
-rwxrwxrwx 1 root root      594 Dec 22 13:03 package.json
-rwxrwxrwx 1 root root      818 Dec 30 11:04 SplitFrames.js
Below is what's in the main js file, SplitFrames.js
var execute = require('lambduh-execute');
var validate = require('lambduh-validate');

process.env['PATH'] = process.env['PATH'] + ':/tmp/:' + process.env['LAMBDA_TASK_ROOT'];

exports.handler = function(event, context, callback) {
    var exec = require('child_process').exec;
    var cmd = 'ffmpeg -version';
    exec(cmd, function(error, stdout, stderr) {
        if (error) {
            // Log the failure instead of silently returning nothing
            console.log(error, stderr);
            return callback(error);
        }
        console.log(stdout);
        callback(null, stdout);
    });
};
I test the function in lambda and it outputs nothing. Wondering how to include the ffmpeg library with AWS and node js. Any help is greatly appreciated.
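One thing worth ruling out, given the ffmpeg.exe name in the listing above: AWS Lambda runs on Linux, so a Windows (PE) build of ffmpeg would fail to execute where a Linux (ELF) build is needed. A small sketch that distinguishes the two by their magic bytes (the helper is hypothetical):

```python
def binary_format(first_bytes: bytes) -> str:
    """Classify an executable by its leading magic bytes."""
    if first_bytes.startswith(b"MZ"):
        return "PE (Windows executable; will not run on Lambda's Linux hosts)"
    if first_bytes.startswith(b"\x7fELF"):
        return "ELF (Linux executable)"
    return "unknown"

# Check the bundled binary locally before deploying:
# with open("bin/ffmpeg.exe", "rb") as f:
#     print(binary_format(f.read(4)))
```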
raise NeedDownloadError('Need ffmpeg exe.') -> NeedDownloadError: Need ffmpeg exe
I'm trying to execute a call to an unofficial Instagram API Python library; after fixing several dependency errors, I'm stuck at this one:
File "C:\Users\Pablo\Desktop\txts_pys_phps_programacion\Instagram-API-python-master\InstagramAPI.py", line 15, in
    from moviepy.editor import VideoFileClip
File "C:\Python27\lib\site-packages\moviepy\editor.py", line 22, in
    from .video.io.VideoFileClip import VideoFileClip
File "C:\Python27\lib\site-packages\moviepy\video\io\VideoFileClip.py", line 3, in
    from moviepy.video.VideoClip import VideoClip
File "C:\Python27\lib\site-packages\moviepy\video\VideoClip.py", line 20, in
    from .io.ffmpeg_writer import ffmpeg_write_image, ffmpeg_write_video
File "C:\Python27\lib\site-packages\moviepy\video\io\ffmpeg_writer.py", line 15, in
    from moviepy.config import get_setting
File "C:\Python27\lib\site-packages\moviepy\config.py", line 38, in
    FFMPEG_BINARY = get_exe()
File "C:\Python27\lib\site-packages\imageio\plugins\ffmpeg.py", line 86, in get_exe
    raise NeedDownloadError('Need ffmpeg exe. '
NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()
FFMPEG: only get audio compressed frames / packets. command line
I would like a command line for FFMPEG that will create a file which contains only the compressed (undecoded) audio frames/packets from a file.
Example:
ffmpeg -i in.mp3 --no-decompress out
Something like this would be fantastic.
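For illustration, the compressed frames the question is after are already delimited inside a file like in.mp3: each MPEG audio frame starts with an 11-bit sync word. A rough pure-Python scan for candidate frame starts follows; this is a sketch only, and a real demuxer would also parse the header fields to step over whole frames rather than scanning byte by byte.

```python
def mp3_frame_offsets(data: bytes):
    """Return byte offsets that look like MPEG audio frame headers.

    An MPEG audio frame begins with an 11-bit sync word of all ones,
    i.e. 0xFF followed by a byte whose top three bits are set. This
    only flags candidate frame starts; it does not validate the rest
    of the header.
    """
    return [
        i
        for i in range(len(data) - 1)
        if data[i] == 0xFF and data[i + 1] & 0xE0 == 0xE0
    ]
```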
Creating HLS variants with FFMPEG
I am starting with a high-res video file and I would like to create 3 variants: low quality, mid quality, and high quality for mobile streaming. I want these low/mid/high variants to be segmented into ts pieces that the m3u8 file will point to. Is there a way to do this in one line in ffmpeg?
I have successfully generated an m3u8 file and ts segments with ffmpeg, do I need to do this 3x and set specs for low/mid/high? If so, how do I get the singular m3u8 file to point to all variants as opposed to one for each variant?
This is the command I used to generate the m3u8 file along with the ts segments.
ffmpeg -i C:\Users\george\Desktop\video\hos.mp4 -strict -2 -acodec aac -vcodec libx264 -crf 25 C:\Users\user\Desktop\video\hos_Phone.m3u8
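If the three renditions are encoded separately, the single entry point the players load is a master playlist that references each variant playlist; per the HLS spec it is plain text, so it can be hand-written or generated by a small script. A sketch follows; the bitrates, resolutions, and file names are made up.

```python
def master_playlist(variants):
    """Build an HLS master playlist pointing at per-variant playlists.

    `variants` is a list of (bandwidth_bits_per_sec, resolution, uri)
    tuples. Only the master file is produced here; the low/mid/high
    .m3u8 + .ts sets still have to come from three separate encodes.
    """
    lines = ["#EXTM3U"]
    for bandwidth, resolution, uri in variants:
        lines.append("#EXT-X-STREAM-INF:BANDWIDTH=%d,RESOLUTION=%s"
                     % (bandwidth, resolution))
        lines.append(uri)
    return "\n".join(lines) + "\n"

print(master_playlist([
    (800000, "640x360", "hos_low.m3u8"),
    (1400000, "960x540", "hos_mid.m3u8"),
    (2800000, "1280x720", "hos_high.m3u8"),
]))
```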
Audacity FFMPG exporting partial file
New problem: Audacity with a new FFMPEG. Exporting 8 channels to 7.1, and it cuts out at 23 minutes; the tracks are 2 hours long. The speed drops to 1.x near the end, and there is plenty of disk space, so I'm not sure what is wrong with it.
ffmpeg -i - -strict experimental -c:a aac -b:a 240k "F:\Something.aac"

ffmpeg version N-82966-g6993bb4 Copyright (c) 2000-2016 the FFmpeg developers
  built with gcc 5.4.0 (GCC)
  configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
  libavutil      55. 43.100 / 55. 43.100
  libavcodec     57. 70.100 / 57. 70.100
  libavformat    57. 61.100 / 57. 61.100
  libavdevice    57.  2.100 / 57.  2.100
  libavfilter     6. 68.100 /  6. 68.100
  libswscale      4.  3.101 /  4.  3.101
  libswresample   2.  4.100 /  2.  4.100
  libpostproc    54.  2.100 / 54.  2.100
Guessed Channel Layout for Input Stream #0.0 : 7.1
Input #0, wav, from 'pipe:':
  Duration: N/A, bitrate: 5644 kb/s
  Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 7.1, s16, 5644 kb/s
Output #0, adts, to 'F:\Something.aac':
  Metadata:
    encoder : Lavf57.61.100
  Stream #0:0: Audio: aac (LC), 44100 Hz, 7.1, fltp, 240 kb/s
    Metadata:
      encoder : Lavc57.70.100 aac
Stream mapping: Stream #0:0 -> #0:0 (pcm_s16le (native) -> aac (native))
size=     63kB time=00:00:02.18 bitrate= 237.6kbits/s speed= 4.3x
size=    117kB time=00:00:03.99 bitrate= 240.0kbits/s speed=3.96x
size=    145kB time=00:00:04.94 bitrate= 240.3kbits/s speed= 2.4x
size=    193kB time=00:00:06.57 bitrate= 240.8kbits/s speed=2.57x
size=    249kB time=00:00:08.45 bitrate= 241.1kbits/s speed=2.76x
size=    293kB time=00:00:09.96 bitrate= 241.3kbits/s speed=2.45x
size=    353kB time=00:00:11.98 bitrate= 241.5kbits/s speed=2.62x
size=    386kB time=00:00:13.09 bitrate= 241.6kbits/s speed=2.58x
size=    441kB time=00:00:14.95 bitrate= 241.7kbits/s speed=2.42x
size=    506kB time=00:00:17.13 bitrate= 241.7kbits/s speed=2.57x
size=    575kB time=00:00:19.48 bitrate= 241.8kbits/s speed=2.72x
size=    589kB time=00:00:19.94 bitrate= 241.9kbits/s speed=2.53x
size=    655kB time=00:00:22.17 bitrate= 241.9kbits/s speed=2.65x
size=    736kB time=00:00:24.91 bitrate= 241.9kbits/s speed=2.81x
size=    737kB time=00:00:24.96 bitrate= 241.9kbits/s speed=2.63x
size=    809kB time=00:00:27.39 bitrate= 242.0kbits/s speed=2.74x
size=    885kB time=00:00:29.95 bitrate= 242.0kbits/s speed=2.71x
size=  31922kB time=00:17:56.36 bitrate= 243.0kbits/s speed=3.15x
size=  31991kB time=00:17:58.70 bitrate= 243.0kbits/s speed=3.15x
size=  32029kB time=00:17:59.96 bitrate= 243.0kbits/s speed=3.14x
size=  32053kB time=00:18:00.77 bitrate= 243.0kbits/s speed=3.14x
size=  32084kB time=00:18:01.84 bitrate= 243.0kbits/s speed=3.14x
size=  32116kB time=00:18:02.88 bitrate= 243.0kbits/s speed=3.14x
size=  32144kB time=00:18:03.83 bitrate= 243.0kbits/s speed=3.14x
size=  32177kB time=00:18:04.95 bitrate= 243.0kbits/s speed=3.13x
size=  32248kB time=00:18:07.34 bitrate= 243.0kbits/s speed=3.13x
.... Repeat of much of the same 3.0x throughout
size=  40419kB time=00:22:42.89 bitrate= 242.9kbits/s speed=3.03x
size=  40449kB time=00:22:43.91 bitrate= 242.9kbits/s speed=3.03x
size=  40480kB time=00:22:44.96 bitrate= 242.9kbits/s speed=3.03x
size=  40557kB time=00:22:47.56 bitrate= 242.9kbits/s speed=3.03x
size=  40628kB time=00:22:49.95 bitrate= 242.9kbits/s speed=3.03x
size=  40719kB time=00:22:53.01 bitrate= 242.9kbits/s speed=3.03x
size=  40751kB time=00:22:54.11 bitrate= 242.9kbits/s speed=1.06x
size=  40753kB time=00:22:54.15 bitrate= 242.9kbits/s speed=1.06x
video:0kB audio:40348kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.002666%
[aac @ 00000000026d5aa0] Qavg: 331.324
ffmpeg - whatsapp: video format not supported
I have two videos (.mp4) files. One uploads to whatsapp and another does not.
Using ffmpeg, I checked their properties:
a) Properties of the video which uploads:
Duration: 00:00:56.45, start: 0.148000, bitrate: 1404 kb/s
  Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1080x1080, 1359 kb/s, 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc (default)
    Metadata:
      handler_name : VideoHandler
  Stream #0:1(eng): Audio: aac (HE-AACv2) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 47 kb/s (default)
    Metadata:
      handler_name : SoundHandler
At least one output file must be specified
b) Properties of the video which does not upload to whatsapp (because it says format not supported):
Duration: 00:00:56.10, start: 0.000000, bitrate: 543 kb/s
  Stream #0:0: Video: h264 (High) (H264 / 0x34363248), yuv420p, 1080x1080 [SAR 1:1 DAR 1:1], 464 kb/s, 23.98 fps, 23.98 tbr, 23.98 tbn, 47.95 tbc
  Stream #0:1: Audio: aac (LC) ([255][0][0][0] / 0x00FF), 48000 Hz, stereo, fltp, 56 kb/s
The differences I noticed in the videos:
- (avc1 / 0x31637661) vs (H264 / 0x34363248)
- 1359 kb/s vs 464 kb/s
- 90k tbn vs 23.98 tbn
What can be the reason?
Also, the second video does not play on Android.
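For what it's worth, the (avc1 / 0x31637661) and (H264 / 0x34363248) pairs in the listings above are FourCC codec tags shown next to their little-endian integer forms, and a picky player may reject the less common H264 tag. A small sketch decoding them:

```python
import struct

def fourcc(value: int) -> str:
    """Decode a codec tag integer (as printed by ffmpeg next to the tag,
    e.g. "avc1 / 0x31637661") back into its four-character form."""
    return struct.pack("<I", value).decode("ascii")

print(fourcc(0x31637661))  # avc1
print(fourcc(0x34363248))  # H264
```

If the tag turns out to be the culprit, remuxing without re-encoding, e.g. `ffmpeg -i in.mp4 -c copy -tag:v avc1 out.mp4`, might be worth trying.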
The link for the video is
https://drive.google.com/open?id=0B4UM6vTHw4pyMExQQ1lxZGp0N2c
avcodec/mjpegdec: Check for rgb before flipping
avcodec/mjpegdec: Check for rgb before flipping

Fixes assertion failure due to unsupported case
Fixes: 356/fuzz-1-ffmpeg_VIDEO_AV_CODEC_ID_MJPEG_fuzzer

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/targets/ffmpeg
Signed-off-by: Michael Niedermayer