avformat/mxfenc: Bump minor versions for S377-1-2009
Signed-off-by: Michael Niedermayer
FFmpeg default bitrate value
What does FFmpeg do if I specify a codec to re-encode with but omit the bitrate parameter? I tested with one video, but I would like to understand the general behaviour.
Original:
Duration: 00:00:10.48, start: 0.000000, bitrate: 17282 kb/s
then I ran
ffmpeg.exe -i a.mp4 -c:v h264 c.mp4
Result:
Duration: 00:00:10.50, start: 0.000000, bitrate: 4467 kb/s
Where did it get 4467 from? Is it a standard value for any video, or does it depend on something?
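For what it's worth, a minimal sketch (assuming -c:v h264 resolves to the libx264 encoder): when no bitrate or quality option is given, the libx264 wrapper falls back to constant-quality mode with CRF 23, so the resulting bitrate is not a fixed standard value but depends on the content. The commands below make that choice explicit; the 4467k figure only echoes the number from the question.
# Same quality target as the default, stated explicitly
ffmpeg -i a.mp4 -c:v libx264 -crf 23 c.mp4
# Or request a specific average bitrate instead
ffmpeg -i a.mp4 -c:v libx264 -b:v 4467k c.mp4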
How do you convert an entire directory with ffmpeg?
How do you convert an entire directory/folder with ffmpeg, via the command line or with a batch script?
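A minimal bash sketch, assuming .mkv inputs and an H.264/AAC MP4 target (the extension and codecs are placeholders to adjust):
# Convert every .mkv in the current directory to .mp4
for f in *.mkv; do
  ffmpeg -i "$f" -c:v libx264 -c:a aac "${f%.mkv}.mp4"
done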
ffmpeg output in Audacity
I want to split the audio track of an MP4 file, apply a different filter to each copy, and then merge them into an output MP4 file. Please note I do not want the filters in series, but rather parallel filters that are then merged.
I came up with the following command.
ffmpeg -i input.mp4 -filter_complex "[0:a]asplit[audio1][audio2];[audio1]highpass=f=200:p=1:t=h:w=50;[audio2]lowpass=f=700:p=1:t=h:w=200;[audio1][audio2]amerge=inputs=2[out]" -map "[out]" -map 0:v -c:v copy -map 0:s? -c:s copy -ac 2 -y output.mp4
This output plays in VLC and mpv. However, when I try to open it in Audacity, I get:
Why do I get this? I assume index [05] is the correct audio output.
This raises the question: which audio track is playing when the file is opened in VLC? How can I create an output that has only one final audio track?
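A sketch of one possible fix (the labels [a1]/[a2]/[hp]/[lp] are illustrative additions): in the original graph the highpass and lowpass chains have no output labels, so their results are added to the file as extra audio streams in addition to [out]; labelling them and feeding only those labels into amerge should leave a single final audio track.
ffmpeg -i input.mp4 \
  -filter_complex "[0:a]asplit[a1][a2];[a1]highpass=f=200:p=1:t=h:w=50[hp];[a2]lowpass=f=700:p=1:t=h:w=200[lp];[hp][lp]amerge=inputs=2[out]" \
  -map "[out]" -map 0:v -c:v copy -map 0:s? -c:s copy -ac 2 -y output.mp4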

How to compile ffmpeg with the mingw-w64 toolchain in a Cygwin environment
I want to compile the ffmpeg project with the mingw-w64 compiler in a Cygwin environment. I took the following steps:
Installed Cygwin and the mingw-w64 packages with setup-x86_64.exe, and cloned the ffmpeg project from https://git.ffmpeg.org/ffmpeg.git.
Entered the ffmpeg folder and executed the following commands:
./configure --host-os=x86_64-w64-mingw32 --enable-shared --disable-static
make
I found that the make command did not invoke the compiler from mingw-w64; instead it invoked the compiler from Cygwin. I tried "make CC=x86_64-w64-mingw32-gcc", but it failed with errors about missing header files, such as:
fatal error: sys/ioctl.h: No such file or directory
I think these header files are installed. Which command is correct: invoking make directly, or make with the CC option? I want to use the mingw-w64 compiler to build ffmpeg; how can I achieve that?
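For reference, a sketch of the usual cross-compilation setup under Cygwin (an assumption on my part, not from the original post; it requires the Cygwin mingw64-x86_64 toolchain packages): the target compiler is normally selected through configure's cross-compilation options rather than by overriding CC at make time.
./configure --enable-cross-compile --cross-prefix=x86_64-w64-mingw32- \
            --target-os=mingw32 --arch=x86_64 --enable-shared --disable-static
make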
-- https://git.ffmpeg.org/ffmpeg.git
Stitching videos side by side using java [on hold]
How to stitch two videos side by side and make one video, like in the Smule app, using Java. Any libraries or library wrappers for this? Please help.
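A minimal sketch of the side-by-side composition itself using the ffmpeg CLI (file names are placeholders), which a Java application could invoke through ProcessBuilder or a wrapper such as JavaCV; hstack assumes both inputs have the same height:
ffmpeg -i left.mp4 -i right.mp4 \
  -filter_complex "[0:v][1:v]hstack=inputs=2[v]" \
  -map "[v]" -map 0:a? output.mp4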
ffprobe: fix SEGV when new streams are added
Error: No input specified at FfmpegCommand.proto.setStartTime.proto.seekInput ffmpeg
Error: No input specified
    at FfmpegCommand.proto.setStartTime.proto.seekInput (/Users/omkar/Desktop/whatsappvideo/node_modules/fluent-ffmpeg/lib/options/inputs.js:147:13)
    at process_video (/Users/omkar/Desktop/whatsappvideo/app.js:25:10)
    at /Users/omkar/Desktop/whatsappvideo/app.js:15:7
    at Layer.handle [as handle_request] (/Users/omkar/Desktop/whatsappvideo/node_modules/express/lib/router/layer.js:95:5)
    at next (/Users/omkar/Desktop/whatsappvideo/node_modules/express/lib/router/route.js:137:13)
    at Route.dispatch (/Users/omkar/Desktop/whatsappvideo/node_modules/express/lib/router/route.js:112:3)
    at Layer.handle [as handle_request] (/Users/omkar/Desktop/whatsappvideo/node_modules/express/lib/router/layer.js:95:5)
    at /Users/omkar/Desktop/whatsappvideo/node_modules/express/lib/router/index.js:281:22
    at Function.process_params (/Users/omkar/Desktop/whatsappvideo/node_modules/express/lib/router/index.js:335:12)
    at Busboy.next (/Users/omkar/Desktop/whatsappvideo/node_modules/express/lib/router/index.js:275:10)
Code: app.js
var express = require("express"),
    app = express(),
    http = require("http").Server(app).listen(8080),
    upload = require("express-fileupload");

var video = null;

app.use(upload());
console.log("Server Started!");

app.get("/", function(req, res) {
    res.sendFile(__dirname + "/index.html");
});

app.post("/", function(req, res) {
    if (req.files) {
        video = req.files;
        process_video(req.files.upfile.data);
        //console.log(req.files.upfile.data);
    }
});

function process_video(video) {
    var ffmpeg = require('fluent-ffmpeg');
    ffmpeg(video)
        .setStartTime(120)
        .seekInput(0)
        .setDuration(10)
        .output('test.mp4')
        .on('start', function(commandLine) {
            console.log('start : ' + commandLine);
        })
        .on('progress', function(progress) {
            console.log('In Progress !!' + Date());
        })
        .on('end', function(err) {
            if (!err) {
                console.log('conversion Done');
            }
        })
        .on('error', function(err) {
            console.log('error: ' + err);
        })
        .run();
}
wowza + live + ffmpeg + hls player, how to create the playlist.m3u8?
I'm trying to set up a Wowza live test server so that I can play HLS from my mobile app. It works without any problem for VOD: I can play it in my app, and I can also see the .m3u8 file if I enter its URI in the browser. I tried to do the same in live mode (my goal is to test some streaming parameters for live streaming). I tried to use ffmpeg to create the live stream:
ffmpeg -re -i "myInputTestVideo.mp4" -vcodec libx264 -vb 150000 -g 60 -vprofile baseline -level 2.1 -acodec aac -ab 64000 -ar 48000 -ac 2 -vbsf h264_mp4toannexb -strict experimental -f mpegts udp://127.0.0.1:10000
I created a "source file" and connected it to the "Incoming Streams". I can see in my application's Monitoring / Network tab that it do getting the data from ffmpeg.
My problem is how to get the playlist.m3p8 file so I can play it from inside my app (hls based)?
Again, for now I need a way to test playing with the streaming settings and in real live I'll have a real live streaming source.
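For reference, the HLS playback URL Wowza normally exposes for a live application follows the pattern below (host, port, application and stream names are placeholders, not taken from the original post):
http://<wowza-host>:1935/<live-application>/<stream-name>/playlist.m3u8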
Revision 110154: r6b275f8 had no z version change, which is wrong
Generate thumbnail for inmemory uploaded video file
The client app uploads a video file, and I need to generate a thumbnail, dump it to AWS S3, and return the link to the thumbnail to the client. I searched around and found ffmpeg fit for the purpose. The following is the code I came up with:
from ffmpy import FFmpeg
import tempfile
import os
import traceback

def generate_thumbnails(file_name):
    output_file = tempfile.NamedTemporaryFile(suffix='.jpg', delete=False,
                                              prefix=file_name)
    output_file_path = output_file.name
    try:
        # generate the thumbnail using the first frame of the video
        ff = FFmpeg(inputs={file_name: None},
                    outputs={output_file_path: ['-ss', '00:00:1', '-vframes', '1']})
        ff.run()
        # upload generated thumbnail to s3 logic
        # return uploaded s3 path
    except Exception:
        error = traceback.format_exc()
        write_error_log(error)
    finally:
        os.remove(output_file_path)
    return ''
I was using Django and was greeted with a permission error for the above. I found out later that ffmpeg requires the file to be on disk and does not take the in-memory uploaded file into account (I may be wrong, as I assumed this).
Is there a way to read an in-memory video file like a normal one using ffmpeg, or should I use StringIO and dump it onto a temporary file? I would prefer not to do that, as it is an overhead.
Any alternative solution with better performance would also be appreciated.
Thanks.
Update: to save the in-memory uploaded file to disk, see: How to copy InMemoryUploadedFile object to disk
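One avenue worth noting (a sketch, independent of ffmpy): ffmpeg can read its input from a pipe instead of a file on disk, so the in-memory bytes can be streamed to its stdin; this only works cleanly when the uploaded container is streamable (for example an MP4 with the MOOV atom at the front). The file name below is a placeholder.
# Grab one frame as a JPEG while reading the video from stdin
cat upload.mp4 | ffmpeg -i pipe:0 -ss 00:00:01 -vframes 1 thumbnail.jpg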
FFmpeg video recording in Android getting an overlay of green patches
I have used FFmpeg and OpenCV to integrate a video player into an Android application.
build.gradle:
compile('org.bytedeco:javacv-platform:1.4') { exclude group: 'org.bytedeco.javacpp-presets'
}
compile group: 'org.bytedeco.javacpp-presets', name: 'opencv', version: '3.4.0-1.4'
compile group: 'org.bytedeco.javacpp-presets', name: 'ffmpeg', version: '3.4.1-1.4'
compile files('libs/ffmpeg-android-arm.jar')
compile files('libs/ffmpeg-android-x86.jar')
compile files('libs/opencv-android-arm.jar')
compile files('libs/opencv-android-x86.jar')
I have included 'jniLibs' in the 'main' folder, with 'armeabi', 'armeabi-v7a' and 'x86' folders.
I am able to open Camera and record the video.
The output video is not coming out as expected, though the audio quality is fine. Please see the image below.
The code I used for integration: https://github.com/CrazyOrr/FFmpegRecorder
Thanks in advance!!
--
net.ypresto.qtfaststartjava dependency not resolving in Maven
I am using QtFastStart to make MP4 video stream faster (by placing the MOOV atom first). I got the Maven coordinates from https://javalibs.com/artifact/net.ypresto.qtfaststartjava/qtfaststart .
<dependency>
  <groupId>net.ypresto.qtfaststartjava</groupId>
  <artifactId>qtfaststart</artifactId>
  <version>0.1.0</version>
</dependency>
I am getting a "Missing artifact net.ypresto.qtfaststartjava:qtfaststart:jar:0.1.0" error in the pom.xml.
This dependency is not resolving. Can anyone help me solve this, or suggest other libraries for placing the MOOV atom first, like QtFastStart?
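An alternative worth sketching if the Java dependency cannot be resolved: ffmpeg itself can relocate the MOOV atom to the front of the file without re-encoding, using the faststart movflag.
# Remux only (no re-encode) and move the MOOV atom to the start of the file
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4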
-- https://javalibs.com/artifact/net.ypresto.qtfaststartjava/qtfaststart
Concatenate audio with image and video using ffmpeg
I have 1 image, 1 audio file and 1 video. I would like to merge all of them to make a video which will
- show the image and play audio file for the first 10s
- play the video file
Here is what I have tried so far.
ffmpeg \
-loop 1 -framerate 24 -t 10 -i item1.jpg \
-i "https://audio-ssl.itunes.apple.com/apple-assets-us-std-000001/Music/66/58/f7/mzi.eoocfriy.aac.p.m4a" \
-i item4.mp4 \
-filter_complex \
"[0]scale=432:432,setdar=1[img1]; \
 [1]volume=1[aud1]; \
 [2]scale=432:432,setdar=1[vid1]; \
 [img1][aud1][vid1] concat=n=3:v=1:a=1" \
outputfile.mp4
I got the error:
[Parsed_setdar_4 @ 0x3063780] Media type mismatch between the 'Parsed_setdar_4' filter output pad 0 (video) and the 'Parsed_concat_6' filter input pad 1 (audio)
[AVFilterGraph @ 0x30479a0] Cannot create the link setdar:0 -> concat:1
Error initializing complex filters.
Invalid argument
I tried Googling but still cannot figure out what I am doing wrong.
Updated: I ran the following command:
ffmpeg \
-loop 1 -framerate 24 -t 10 -i item1.jpg \
-t 10 -i "https://audio-ssl.itunes.apple.com/apple-assets-us-std-000001/Music/66/58/f7/mzi.eoocfriy.aac.p.m4a" \
-i item4.mp4 \
-f lavfi -t 1 -i anullsrc \
-filter_complex \
"[0]scale=432:432,setsar=1[img1]; \
[2]scale=432:432,setsar=1[vid1]; \
 [img1][1][vid1][3] concat=n=2:v=1:a=1" \
outputfile.mp4
and got the following error:
ffmpeg version 3.3.3 Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
  configuration: --extra-libs=-ldl --prefix=/opt/ffmpeg --mandir=/usr/share/man --enable-avresample --disable-debug --enable-nonfree --enable-gpl --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-decoder=amrnb --disable-decoder=amrwb --enable-libpulse --enable-libfreetype --enable-gnutls --disable-ffserver --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libvorbis --enable-libtheora --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libspeex --enable-libass --enable-avisynth --enable-libsoxr --enable-libxvid --enable-libvidstab --enable-libwavpack --enable-nvenc --enable-libzimg
  libavutil      55. 58.100 / 55. 58.100
  libavcodec     57. 89.100 / 57. 89.100
  libavformat    57. 71.100 / 57. 71.100
  libavdevice    57.  6.100 / 57.  6.100
  libavfilter     6. 82.100 /  6. 82.100
  libavresample   3.  5.  0 /  3.  5.  0
  libswscale      4.  6.100 /  4.  6.100
  libswresample   2.  7.100 /  2.  7.100
  libpostproc    54.  5.100 / 54.  5.100
Input #0, image2, from 'item1.jpg':
  Duration: 00:00:00.04, start: 0.000000, bitrate: 8365 kb/s
    Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 432x432 [SAR 1:1 DAR 1:1], 24 fps, 24 tbr, 24 tbn, 24 tbc
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'https://audio-ssl.itunes.apple.com/apple-assets-us-std-000001/Music/66/58/f7/mzi.eoocfriy.aac.p.m4a':
  Metadata:
    major_brand     : M4A
    minor_version   : 0
    compatible_brands: M4A mp42isom
    creation_time   : 1983-06-16T23:20:44.000000Z
    iTunSMPB        : 00000000 00000840 00000000 00000000001423C0 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
  Duration: 00:00:29.98, start: 0.047891, bitrate: 285 kb/s
    Stream #1:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 271 kb/s (default)
    Metadata:
      creation_time   : 1983-06-16T23:20:44.000000Z
Input #2, mov,mp4,m4a,3gp,3g2,mj2, from 'item4.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    creation_time   : 1970-01-01T00:00:00.000000Z
    encoder         : Lavf53.24.2
  Duration: 00:00:13.70, start: 0.000000, bitrate: 615 kb/s
    Stream #2:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 320x240 [SAR 1:1 DAR 4:3], 229 kb/s, 15 fps, 15 tbr, 15360 tbn, 30 tbc (default)
    Metadata:
      creation_time   : 1970-01-01T00:00:00.000000Z
      handler_name    : VideoHandler
    Stream #2:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 5.1, fltp, 382 kb/s (default)
    Metadata:
      creation_time   : 1970-01-01T00:00:00.000000Z
      handler_name    : SoundHandler
Input #3, lavfi, from 'anullsrc':
  Duration: N/A, start: 0.000000, bitrate: 705 kb/s
    Stream #3:0: Audio: pcm_u8, 44100 Hz, stereo, u8, 705 kb/s
[AVFilterGraph @ 0x3955e20] No such filter: ' '
Error initializing complex filters.
Invalid argument
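For what it's worth, the "No such filter: ' '" error usually means stray whitespace or a literal backslash survived inside the quoted filtergraph after the line continuations. A sketch of the same command with the graph written as a single string and the concat inputs spelled out as explicit pads (the [1:a]/[3:a] pad names and the [v]/[a] output labels are my additions, not from the original post):
ffmpeg \
  -loop 1 -framerate 24 -t 10 -i item1.jpg \
  -t 10 -i "https://audio-ssl.itunes.apple.com/apple-assets-us-std-000001/Music/66/58/f7/mzi.eoocfriy.aac.p.m4a" \
  -i item4.mp4 \
  -f lavfi -t 1 -i anullsrc \
  -filter_complex "[0]scale=432:432,setsar=1[img1];[2]scale=432:432,setsar=1[vid1];[img1][1:a][vid1][3:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" \
  outputfile.mp4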
How can I record audio from a USB audio device using C++ with ffmpeg on Linux?
How can I record audio from a USB audio device using C++ with ffmpeg on Linux? I have the following code, but I have no idea how to set the 'url' parameter of avformat_open_input. Could anybody provide assistance? Much appreciated.
av_register_all();
avdevice_register_all();

//pAudioInputFmt = av_find_input_format("dshow");
pAudioInputFmt = av_find_input_format("alsa");
//assert(pAudioInputFmt != NULL);
if (!(pAudioInputFmt != NULL))
{
    printf("Error %s %d\n", __FILE__, __LINE__);
    char ch = cin.get();
    cout << "ch = " << ch << endl;
    return (-1);
}

// I have no idea how to set the second parameter on Linux.
if (!(avformat_open_input(&pFmtCtx, "=Device)", pAudioInputFmt, NULL) == 0))
{
    printf("Error %s %d\n", __FILE__, __LINE__);
    system("pause");
    return (-1);
}
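For the "alsa" input format, the url argument of avformat_open_input() is simply the ALSA device name, such as "default" or "hw:0,0". A quick shell sketch to discover and verify the device name before hard-coding it (the device index here is an assumption; check your own hardware):
# List ALSA capture devices, then test the name with the ffmpeg CLI
arecord -l
ffmpeg -f alsa -i hw:0,0 -t 5 capture.wav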
Revision 110156: Added the « Classes CSS du lien » (link CSS classes) option on links to a template of ...
avcodec/cbs_h2645: use AVBufferRef to store list of active parameter sets
Removes unnecessary data copies, and partially fixes potential issues with dangling references held in said lists.
Reviewed-by: Mark Thompson
Signed-off-by: James Almer
Twitter gives an error on an ffmpeg-generated video: “The media you tried to upload was invalid.”
The ffmpeg-generated video gives an error while being shared on Twitter. The error is:
“Cannot read property ‘code' of undefined”
I am generating the video from audio. My sample command is:
ffmpeg -i audio.webm -i image.png -vcodec libx264 -pix_fmt yuv420p -strict -2 -acodec aac video.mp4
I am directly trying to upload the generated video to the Twitter website; the video is just 6 seconds long.
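A sketch of a variant that matches Twitter's commonly cited upload constraints (even pixel dimensions, yuv420p, AAC audio, MOOV atom at the front); both the constraints and the added flags are assumptions on my part, not from the original post:
ffmpeg -loop 1 -i image.png -i audio.webm \
       -c:v libx264 -tune stillimage -pix_fmt yuv420p \
       -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" \
       -c:a aac -b:a 128k -ar 44100 \
       -movflags +faststart -shortest video.mp4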
Using ffmpeg to replace a single frame based on timestamp
Is it possible to use the ffmpeg CLI to replace a specific frame at a specified timestamp with another image? I know how to extract all frames from a video and re-stitch them into another video, but I am looking to avoid that process if possible.
My goal:
- Given a video file input.mp4
- Given a PNG file, image.png, which is known to occur at exactly a specific timestamp within input.mp4
- Create out.mp4 with image.png replacing that position of input.mp4
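One way to sketch this without extracting every frame (the timestamps, frame rate and file names below are placeholders): overlay the PNG on top of the video only for the duration of that single frame, re-encoding the video while copying the audio.
# Show image.png instead of the original content only for the frame at t=12.000
# (assumes 25 fps, so one frame lasts 0.04 s; adjust the enable window accordingly)
ffmpeg -i input.mp4 -i image.png \
  -filter_complex "[0:v][1:v]overlay=enable='between(t,12.000,12.040)'" \
  -c:a copy out.mp4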
gstreamer: Internal data stream error in appsink "pull-sample" mode
I am getting an "Internal data stream error" in appsink. My application reads .yuv data, encodes it, and writes it to a buffer.
I had this working when writing to a file, but when I changed the code to write to a buffer it gives this error. It is only able to write a single packet (188 bytes).
Output of program:
(ConsoleApplication6.exe:14432): GStreamer-WARNING **: Failed to load plugin 'C:\gstreamer\1.0\x86_64\lib\gstreamer-1.0\libgstopenh264.dll': 'C:\gstreamer\1.0\x86_64\lib\gstreamer-1.0\libgstopenh264.dll': The specified procedure could not be found.
pipeline: filesrc location=Transformers1080p.yuv blocksize=4147200 ! videoparse width=1920 height=1080 framerate=60/1 ! videoconvert ! video/x-raw,format=I420,width=1920,height=1080,framerate=60/1 ! x264enc ! mpegtsmux ! queue ! appsink name = sink
Now playing: Transformers1080p.yuv
Running...
on_new_sample_from_sink
sample got of size = 188
Error: Internal data stream error.
Returned, stopping playback
Deleting pipeline
My code:
#define _CRT_SECURE_NO_WARNINGS 1
//#pragma warning(disable:4996)
/* headers required by this code (the original include list was garbled) */
#include <gst/gst.h>
#include <cstdio>
#include <iostream>

using namespace std;

GstElement *SinkBuff;
char *out_file_path;
FILE *out_file;

//gst-launch-1.0.exe -v filesrc location=Transformers1080p.yuv blocksize=4147200 !
//videoconvert ! video/x-raw,format=I420,width=1920,height=1080,framerate=60/1 !
//openh264enc ! mpegtsmux ! filesink location=final.ts

static gboolean bus_call(GstBus *bus, GstMessage *msg, gpointer data)
{
    GMainLoop *loop = (GMainLoop *)data;
    switch (GST_MESSAGE_TYPE(msg)) {
    case GST_MESSAGE_EOS:
        g_print("End of stream\n");
        g_main_loop_quit(loop);
        break;
    case GST_MESSAGE_ERROR: {
        gchar *debug;
        GError *error;
        gst_message_parse_error(msg, &error, &debug);
        g_free(debug);
        g_printerr("Error: %s\n", error->message);
        g_error_free(error);
        g_main_loop_quit(loop);
        break;
    }
    default:
        break;
    }
    return TRUE;
}

/* called when the appsink notifies us that there is a new buffer ready for
 * processing */
static void on_new_sample_from_sink(GstElement *elt, void *ptr)
{
    guint size;
    GstBuffer *app_buffer, *buffer;
    GstElement *source;
    GstMapInfo map = { 0 };
    GstSample *sample;
    static GstClockTime timestamp = 0;

    printf("\n on_new_sample_from_sink \n ");

    /* get the buffer from appsink */
    g_signal_emit_by_name(SinkBuff, "pull-sample", &sample, NULL);
    if (sample) {
        buffer = gst_sample_get_buffer(sample);
        gst_buffer_map(buffer, &map, GST_MAP_READ);
        printf("\n sample got of size = %d \n", map.size);
        //Buffer
        fwrite((char *)map.data, 1, sizeof(map.size), out_file);
        gst_buffer_unmap(buffer, &map);
        gst_sample_unref(sample);
    }
}

int main(int argc, char *argv[])
{
    GMainLoop *loop;
    int width, height;
    GstElement *pipeline;
    GError *error = NULL;
    GstBus *bus;
    char pipeline_desc[1024];

    out_file = fopen("output.ts", "wb");

    /* Initialisation */
    gst_init(&argc, &argv);

    // Create gstreamer loop
    loop = g_main_loop_new(NULL, FALSE);

    sprintf(pipeline_desc,
            " filesrc location=Transformers1080p.yuv blocksize=4147200 !"
            " videoparse width=1920 height=1080 framerate=60/1 !"
            " videoconvert ! video/x-raw,format=I420,width=1920,height=1080,framerate=60/1 ! "
            //" x264enc ! mpegtsmux ! filesink location=final.ts");
            " x264enc ! mpegtsmux ! queue ! appsink name = sink");
    printf("pipeline: %s\n", pipeline_desc);

    /* Create gstreamer elements */
    pipeline = gst_parse_launch(pipeline_desc, &error);
    /* TODO: Handle recoverable errors. */
    if (!pipeline) {
        g_printerr("Pipeline could not be created. Exiting.\n");
        return -1;
    }

    /* get sink */
    SinkBuff = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    g_object_set(G_OBJECT(SinkBuff), "emit-signals", TRUE, "sync", FALSE, NULL);
    g_signal_connect(SinkBuff, "new-sample", G_CALLBACK(on_new_sample_from_sink), NULL);

    /* Set up the pipeline */
    /* we add a message handler */
    bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    gst_bus_add_watch(bus, bus_call, loop);
    gst_object_unref(bus);

    /* Set the pipeline to "playing" state */
    g_print("Now playing: Transformers1080p.yuv \n");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Iterate */
    g_print("Running...\n");
    g_main_loop_run(loop);

    /* Out of the main loop, clean up nicely */
    g_print("Returned, stopping playback\n");
    gst_element_set_state(pipeline, GST_STATE_NULL);
    g_print("Deleting pipeline\n");
    gst_object_unref(GST_OBJECT(pipeline));
    fclose(out_file);
    g_main_loop_unref(loop);

    return 0;
}