lavfi/xbr: update filter url
FFmpeg - feed raw frames via pipe - FFmpeg does not detect pipe closure
I'm trying to follow these examples from C++ on Windows: Python Example, C# Example.
I have an application that produces raw frames that shall be encoded with FFmpeg. The raw frames are transferred via an IPC pipe to FFmpeg's STDIN. That works as expected; FFmpeg even displays the number of frames currently available.
The problem occurs when we are done sending frames. When I close the write end of the pipe, I would expect FFmpeg to detect that, finish up and output the video. But that does not happen. FFmpeg stays open and seems to wait for more data.
I made a small test project in Visual Studio.
#include "stdafx.h" //// stdafx.h //#include "targetver.h"
//#include
//#include
//#include #include "Windows.h"
#include using namespace std; bool WritePipe(void* WritePipe, const UINT8 *const Buffer, const UINT32 Length)
{ if (WritePipe == nullptr || Buffer == nullptr || Length == 0) { cout << __FUNCTION__ <<": Some input is useless"; return false; } // Write to pipe UINT32 BytesWritten = 0; UINT8 newline = '\n'; bool bIsWritten = WriteFile(WritePipe, Buffer, Length, (::DWORD*)&BytesWritten, nullptr); cout << __FUNCTION__ <<" Bytes written to pipe "<< BytesWritten << endl; //bIsWritten = WriteFile(WritePipe, &newline, 1, (::DWORD*)&BytesWritten, nullptr); // Do we need this? Actually this should destroy the image. FlushFileBuffers(WritePipe); // Do we need this? return bIsWritten;
} #define PIXEL 80 // must be multiple of 8. Otherwise we get warning: Bytes are not aligned int main()
{ HANDLE PipeWriteEnd = nullptr; HANDLE PipeReadEnd = nullptr; { // create us a pipe for inter process communication SECURITY_ATTRIBUTES Attr = { sizeof(SECURITY_ATTRIBUTES), NULL, true }; if (!CreatePipe(&PipeReadEnd, &PipeWriteEnd, &Attr, 0)) { cout <<"Could not create pipes"<< ::GetLastError() << endl; system("Pause"); return 0; } } // Setup the variables needed for CreateProcess // initialize process attributes SECURITY_ATTRIBUTES Attr; Attr.nLength = sizeof(SECURITY_ATTRIBUTES); Attr.lpSecurityDescriptor = NULL; Attr.bInheritHandle = true; // initialize process creation flags UINT32 CreateFlags = NORMAL_PRIORITY_CLASS; CreateFlags |= CREATE_NEW_CONSOLE; // initialize window flags UINT32 dwFlags = 0; UINT16 ShowWindowFlags = SW_HIDE; if (PipeWriteEnd != nullptr || PipeReadEnd != nullptr) { dwFlags |= STARTF_USESTDHANDLES; } // initialize startup info STARTUPINFOA StartupInfo = { sizeof(STARTUPINFO), NULL, NULL, NULL, (::DWORD)CW_USEDEFAULT, (::DWORD)CW_USEDEFAULT, (::DWORD)CW_USEDEFAULT, (::DWORD)CW_USEDEFAULT, (::DWORD)0, (::DWORD)0, (::DWORD)0, (::DWORD)dwFlags, ShowWindowFlags, 0, NULL, HANDLE(PipeReadEnd), HANDLE(nullptr), HANDLE(nullptr) }; LPSTR ffmpegURL = "\"PATHTOFFMPEGEXE\" -y -loglevel verbose -f rawvideo -vcodec rawvideo -framerate 1 -video_size 80x80 -pixel_format rgb24 -i - -vcodec mjpeg -framerate 1/4 -an \"OUTPUTDIRECTORY\""; // Finally create the process PROCESS_INFORMATION ProcInfo; if (!CreateProcessA(NULL, ffmpegURL, &Attr, &Attr, true, (::DWORD)CreateFlags, NULL, NULL, &StartupInfo, &ProcInfo)) { cout <<"CreateProcess failed "<< ::GetLastError() << endl; } //CloseHandle(ProcInfo.hThread); // Create images and write to pipe
#define MYARRAYSIZE (PIXEL*PIXEL*3) // each pixel has 3 bytes UINT8* Bitmap = new UINT8[MYARRAYSIZE]; for (INT32 outerLoopIndex = 9; outerLoopIndex >= 0; --outerLoopIndex) // frame loop { for (INT32 innerLoopIndex = MYARRAYSIZE - 1; innerLoopIndex >= 0; --innerLoopIndex) // create the pixels for each frame { Bitmap[innerLoopIndex] = (UINT8)(outerLoopIndex * 20); // some gray color } system("pause"); if (!WritePipe(PipeWriteEnd, Bitmap, MYARRAYSIZE)) { cout <<"Failed writing to pipe"<< endl; } } // Done sending images. Tell the other process. IS THIS NEEDED? HOW TO TELL FFmpeg WE ARE DONE? //UINT8 endOfFile = 0xFF; // EOF = -1 == 1111 1111 for uint8 //if (!WritePipe(PipeWriteEnd, &endOfFile, 1)) //{ // cout <<"Failed writing to pipe"<< endl; //} //FlushFileBuffers(PipeReadEnd); // Do we need this? delete Bitmap; system("pause"); // clean stuff up FlushFileBuffers(PipeWriteEnd); // Do we need this? if (PipeWriteEnd != NULL && PipeWriteEnd != INVALID_HANDLE_VALUE) { CloseHandle(PipeWriteEnd); } // We do not want to destroy the read end of the pipe? Should not as that belongs to FFmpeg //if (PipeReadEnd != NULL && PipeReadEnd != INVALID_HANDLE_VALUE) //{ // ::CloseHandle(PipeReadEnd); //} return 0; }
And here is the output of FFmpeg:
ffmpeg version 3.4.1 Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 7.2.0 (GCC)
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
  libavutil 55. 78.100 / 55. 78.100
  libavcodec 57.107.100 / 57.107.100
  libavformat 57. 83.100 / 57. 83.100
  libavdevice 57. 10.100 / 57. 10.100
  libavfilter 6.107.100 / 6.107.100
  libswscale 4. 8.100 / 4. 8.100
  libswresample 2. 9.100 / 2. 9.100
  libpostproc 54. 7.100 / 54. 7.100
[rawvideo @ 00000221ff992120] max_analyze_duration 5000000 reached at 5000000 microseconds st:0
Input #0, rawvideo, from 'pipe:':
  Duration: N/A, start: 0.000000, bitrate: 153 kb/s
    Stream #0:0: Video: rawvideo, 1 reference frame (RGB[24] / 0x18424752), rgb24, 80x80, 153 kb/s, 1 fps, 1 tbr, 1 tbn, 1 tbc
Stream mapping: Stream #0:0 -> #0:0 (rawvideo (native) -> mjpeg (native))
[graph 0 input from stream 0:0 @ 00000221ff999c20] w:80 h:80 pixfmt:rgb24 tb:1/1 fr:1/1 sar:0/1 sws_param:flags=2
[auto_scaler_0 @ 00000221ffa071a0] w:iw h:ih flags:'bicubic' interl:0
[format @ 00000221ffa04e20] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_null_0' and the filter 'format'
[swscaler @ 00000221ffa0a780] deprecated pixel format used, make sure you did set range correctly
[auto_scaler_0 @ 00000221ffa071a0] w:80 h:80 fmt:rgb24 sar:0/1 -> w:80 h:80 fmt:yuvj444p sar:0/1 flags:0x4
Output #0, mp4, to 'c:/users/vr3/Documents/Guenni/sometest.mp4':
  Metadata:
    encoder : Lavf57.83.100
    Stream #0:0: Video: mjpeg, 1 reference frame (mp4v / 0x7634706D), yuvj444p(pc), 80x80, q=2-31, 200 kb/s, 1 fps, 16384 tbn, 1 tbc
    Metadata:
      encoder : Lavc57.107.100 mjpeg
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame= 10 fps=6.3 q=1.6 size= 0kB time=00:00:09.00 bitrate= 0.0kbits/s speed=5.63x
As you can see in the last line of the FFmpeg output, the images got through; 10 frames are available. But after closing the pipe, FFmpeg does not close, still expecting input.
As the linked examples show, this should be a valid method.
Trying for a week now...
ffmpeg: recording audio + video into separate files, pressing 'q' or ctrl-C, audio is truncated
Trying to record from the camera and sound card at the same time. If I use the "-t" option with a fixed time, both streams come out fine. If I try to break out while it's recording, by pressing either 'q' or ctrl-C, the audio stream is cut typically 5 seconds short.
I've tried many options, codecs, presets, and combinations of everything, changing the order of the streams, usually with no luck. The thing that works "best" is -preset ultrafast -threads 0; however, with these options the program doesn't exit cleanly, and I can't find documentation for what "-threads 0" really means (even though its name seems obvious).
Here's the basic command:
ffmpeg -y -f alsa -i default -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 -map 0:a out.wav -map 1:v out.mkv
Is it possible to stream multi framerate videos using MPEG-DASH?
I transcoded an mp4 video to several frame rates like 5 FPS, 10 FPS, ..., 30 FPS and used MP4Box to segment them for playback in the DASH-IF player.
FFMPEG command to generate multi-frame-rate videos with the same resolution:
ffmpeg -i fball.mp4 -f mp4 -vcodec libx264 -profile:v high -vf scale=1280:-1 -b:v 2000k -minrate 2000k -maxrate 2000k -bufsize 2000k -nal-hrd cbr -g 120 -keyint_min 120 -r 60.0 -flags +cgop -sc_threshold 0 -pix_fmt yuv420p -threads 0 -x264opts keyint=120:min-keyint=120:sps-id=1 -an -y fball_720p_60fps.mp4
ffmpeg -i fball.mp4 -f mp4 -vcodec libx264 -profile:v high -vf scale=1280:-1 -b:v 1000k -minrate 1000k -maxrate 1000k -bufsize 1000k -nal-hrd cbr -g 60 -keyint_min 60 -r 30.0 -flags +cgop -sc_threshold 0 -pix_fmt yuv420p -threads 0 -x264opts keyint=60:min-keyint=60:sps-id=1 -an -y fball_720p_30fps.mp4
FFMPEG command to extract audio:
ffmpeg -i fball.mp4 -acodec aac -b:a 128k -vn -strict -2 -y fball_audio.mp4
MP4Box command for segmentation:
MP4Box -frag 2000 -dash 2000 -rap -base-url ./segments/ -profile main -segment-name /segments/%s_ -out dash/fball_dash.mpd fball_720p_24fps.mp4 fball_720p_30fps.mp4 fball_720p_60fps.mp4 fball_audio.mp4
Segment Duration: 2 seconds
GOP length: segment duration x FPS of video
Resolution: 720p for all videos
The result is a VIDEO DECODE error, or the player stalls, while switching frame rate.
Am I making any mistake while transcoding? Is it possible to stream multi-frame-rate videos using MPEG-DASH?
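One thing worth verifying before blaming the switching itself is whether every rendition really has a keyframe at each 2-second boundary; if the I-frame timestamps drift apart between representations, the player has nothing clean to switch to. A quick check with ffprobe (a sketch; the file names match the commands above):
ffprobe -v error -select_streams v:0 -skip_frame nokey -show_entries frame=pkt_pts_time -of csv=p=0 fball_720p_30fps.mp4
ffprobe -v error -select_streams v:0 -skip_frame nokey -show_entries frame=pkt_pts_time -of csv=p=0 fball_720p_60fps.mp4
Both lists should show keyframes at 0, 2, 4, ... seconds; any mismatch points at the encode rather than the packaging.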
Converting a call center recording to something useful
I have a call center recording (when played it sounds like gibberish) for which mediainfo shows:
ion@aurora:~/Inbound$ mediainfo 48401-3405-48403--18042018170000.wav
General
Complete name : 48401-3405-48403--18042018170000.wav
Format : Wave
File size : 327 KiB
Duration : 4mn 11s
Overall bit rate : 10.7 Kbps
Audio
Format : G.723.1
Codec ID : A100
Duration : 4mn 11s
Bit rate : 10.7 Kbps
Channel(s) : 2 channels
Sampling rate : 8 000 Hz
Stream size : 327 KiB (100%)
The ffmpeg info shows this as
ion@aurora:~/Inbound$ ffmpeg -i 48401-3405-48403--18042018170000.wav
ffmpeg version N-91330-ga990184 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.9) 20160609
  configuration: --prefix=/home/ion/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/ion/ffmpeg_build/include --extra-ldflags=-L/home/ion/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/home/ion/bin --enable-gpl --enable-libaom --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
  libavutil 56. 18.102 / 56. 18.102
  libavcodec 58. 20.103 / 58. 20.103
  libavformat 58. 17.100 / 58. 17.100
  libavdevice 58. 4.101 / 58. 4.101
  libavfilter 7. 25.100 / 7. 25.100
  libswscale 5. 2.100 / 5. 2.100
  libswresample 3. 2.100 / 3. 2.100
  libpostproc 55. 2.100 / 55. 2.100
Input #0, wav, from '48401-3405-48403--18042018170000.wav':
  Duration: 00:04:11.37, bitrate: 10 kb/s
    Stream #0:0: Audio: g723_1 ([0][161][0][0] / 0xA100), 8000 Hz, mono, s16, 10 kb/s
At least one output file must be specified
So I converted this file to PCM using
ffmpeg -acodec g723_1 -i 48401-3405-48403--18042018170000.wav -acodec pcm_s16le -f wav outnew1.wav
But the audio still sounds like gibberish. I tried many variations, and only GoldWave worked, but that runs on Windows with a GUI, not a CLI.
So how can I convert this file to something useful so that at least I can listen to it? It feels like a challenge now.
Audio file : https://drive.google.com/open?id=1T54lKaI6IJmOqTPNOA_OkYRz89EQ5F2L
PS : Use VLC to play audio file
ffmpeg read the current segmentation file
I'm developing a system that uses ffmpeg to store videos from some IP cameras. I'm using the segment command to store a 5-minute video per camera. I have a WPF view where I can search historical videos by date. In that case I use the ffmpeg concat command to generate a video with the desired duration. All this works excellently. My question is: is it possible to concatenate the current file of the segmentation? I need, for example, to make a search from date X to the current time, but the last file has not been generated yet by ffmpeg. When I concatenate the files, the last one is not shown because its segment is not finished.
I hope someone can give me some guidance on what I can do.
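If switching containers is an option, one way around the unfinished last file is to segment to MPEG-TS instead of MP4: a TS segment is readable even while it is still being written, because it needs no finalization step when the file is closed. A sketch (the camera URL and file naming are placeholders, not taken from the question):
ffmpeg -rtsp_transport tcp -i rtsp://CAMERA_URL -c copy -f segment -segment_time 300 -reset_timestamps 1 -strftime 1 cam1_%Y-%m-%d_%H-%M-%S.ts
The concat demuxer can then include the still-growing .ts file in its list; it should simply stop at whatever has been flushed to disk at that moment.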
ffmpeg - Converting series of images to video (Ubuntu)
I got pictures named as
pic_0_new.jpg
pic_10_new.jpg
pic_20_new.jpg
...
pic_1050_new.jpg
which I want to turn into a video (Ubuntu ffmpeg). I tried the following
ffmpeg -start_number 0 -i pic_%d_new.jpg -vcodec mpeg4 test.avi
but I don't know how to set the step size and the end number. How to do this?
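The %d pattern of the image2 demuxer expects consecutive numbers, so a step of 10 will not work directly. One workaround (a sketch assuming a bash shell; the frame_%04d names are just invented for the intermediate symlinks) is to create sequentially numbered links first:
n=0
for f in $(ls pic_*_new.jpg | sort -t_ -k2 -n); do   # sort numerically on the number between the underscores
  ln -s "$f" "$(printf 'frame_%04d.jpg' "$n")"
  n=$((n+1))
done
ffmpeg -framerate 25 -i frame_%04d.jpg -vcodec mpeg4 test.avi
(-pattern_type glob would also pick the files up, but glob order is lexicographic, so pic_100_new.jpg would come before pic_20_new.jpg.)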
Thanks for help :)
Can someone explain the reorder_queue_size option of rtsp input in ffmpeg?
I can't find any information about it except one sentence in the documentation: "Set number of packets to buffer for handling of reordered packets."
Can it help with an unstable network or stream? What is the default value, and what value should be set, and when?
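For what it is worth, reordering only happens with RTP over UDP (with TCP interleaving the packets already arrive in order), and it is an input option, so it must come before -i. A usage sketch (URL and value are placeholders):
ffmpeg -rtsp_transport udp -reorder_queue_size 1000 -i rtsp://CAMERA_URL/stream -c copy out.mp4
A larger queue tolerates more out-of-order arrival at the cost of extra latency and memory.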
Revision 24037: Fix #4162: allow testing plugins that require SPIP 3.2.1 & later on the...
r22712 stated that the constant _DEV_VERSION_SPIP_COMPAT is defined as the latest stable version during the trunk's dev phase, but we will most likely forget to update it at each stable release, so we might as well define it with a version y component of 99 (* not being taken into account here)
Issue #4162 (Closed): Allow testing plugins requiring SPIP 3.2.1 in SPIP 3.2 dev
Applied by commit r24037.
Not able to decode the audio(mp3) file using C API
I am running the decode_audio.c example. It compiles successfully, but I get a segmentation fault when I execute it. I included the avformat.h header file and changed the codec logic for the mp3 format. I am using the following commands to compile and execute:
mycode$ gcc -o decode_audio decode_audio.c -lavutil -lavformat -lavcodec -lswresample -lz -lm
mycode$ ./decode_audio audio.mp3 raw.bin
What is the reason for this segmentation fault in my program?
I am using Ubuntu 16.04 LTS and ffmpeg 3.4.4 versions. Please help me.
Thanks in advance.
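Without a backtrace it is hard to say more, but rebuilding with debug info and running under gdb usually points straight at the failing call (a sketch of the workflow, nothing specific to this program):
gcc -g -O0 -o decode_audio decode_audio.c -lavutil -lavformat -lavcodec -lswresample -lz -lm
gdb --args ./decode_audio audio.mp3 raw.bin
# inside gdb: type "run", then "bt" after the crash to see which call dereferenced a bad pointer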
FFMPEG 4 videos merge in one screen
I found a sample that merges 2 videos on one screen:
ffmpeg.exe -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]scale=iw/2:ih/2,pad=2*iw:ih[left];[1:v]scale=iw/2:ih/2[right];[left][right]overlay=main_w/2:0[out]" -map [out] -map 0:a? -map 1:a? -b:v 768k output.mp4
I tried this command to merge 4 videos on one screen:
ffmpeg.exe -i 1.mp4 -i 2.mp4 -i 3.mp4 -i 4.mp4 -filter_complex "[0:v]scale=iw/2:ih/2,pad=2*iw:ih[upperleft];[1:v]scale=iw/2:ih/2[upperright];[2:v]scale=iw/2:ih/2,pad=2*iw:ih[lowerleft];[3:v]scale=iw/2:ih/2[lowerright];[upperleft][upperright]overlay=main_w/2:0;[lowerleft][lowerright]overlay=main_w/2:0[out]" -map [out] -map 0:a? -map 1:a? -b:v 768k output.mp4
But this generates output similar to the 1st command, with only 2 videos merged in one view. I need all 4 videos to be shown on one screen. Additionally, I want the audio of the 1st video file to be used for the output. Please guide.
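The command above pads each row only to double width and never stacks the two rows, so the mapped [out] still contains just one row of two videos. A sketch that pads one corner to the full output size and overlays the other three onto it (assumes all four inputs share the same resolution; audio is mapped only from the first input):
ffmpeg.exe -i 1.mp4 -i 2.mp4 -i 3.mp4 -i 4.mp4 -filter_complex "[0:v]scale=iw/2:ih/2,pad=2*iw:2*ih[base];[1:v]scale=iw/2:ih/2[tr];[2:v]scale=iw/2:ih/2[bl];[3:v]scale=iw/2:ih/2[br];[base][tr]overlay=W/2:0[tmp1];[tmp1][bl]overlay=0:H/2[tmp2];[tmp2][br]overlay=W/2:H/2[out]" -map [out] -map 0:a? -b:v 768k output.mp4
Newer ffmpeg builds (4.1 and later) also offer an xstack filter that builds such mosaics in a single filter.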
Using ffmpeg to read SRTP input
Related to the question and answer in using-ffmpeg-for-stream-encryption-by-srtp-in-windows, I see how to transmit a file as an SRTP output flow and play it with ffplay.
Well, I'm trying to do the opposite operation: I need to launch an ffmpeg process that reads SRTP input and saves an mpegts file to disk.
I've tried something like this:
Launch ffmpeg to generate an SRTP output flow (same step as in the previous link):
ffmpeg -re -i input.avi -f rtp_mpegts -acodec mp3 -srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz srtp://127.0.0.1:20000
Launch ffmpeg, not ffplay, to take this output as a new input and save it to an mpegts file, according to the ffmpeg-srtp-documentation:
ffmpeg -i srtp://127.0.0.1:20000 -srtp_in_suite AES_CM_128_HMAC_SHA1_80 -srtp_in_params zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz -vcodec copy -acodec copy -f mpegts myfile.ts
And I get:
srtp://127.0.0.1:19000: Invalid data found when processing input
Can anyone help me, please?
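One likely cause (an assumption based on how ffmpeg parses its command line, not verified against this setup): -srtp_in_suite and -srtp_in_params are input options, so they must appear before the -i they apply to; placed after it, ffmpeg tries to read the stream without the decryption parameters and fails exactly like this. A reordered sketch:
ffmpeg -srtp_in_suite AES_CM_128_HMAC_SHA1_80 -srtp_in_params zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz -i srtp://127.0.0.1:20000 -vcodec copy -acodec copy -f mpegts myfile.ts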
FFMPEG filter complex concat video with slide transition between image
I've been working on creating a slideshow video with slide transitions; the input is multiple images. Each input is scaled first, then drawtext is drawn on each video, then the transition effect is applied using overlay, and finally the results are concatenated into one video.
I am having trouble getting the results of the drawtext into an overlay slide transition:
ffmpeg -i image1.png -i image2.png -filter_complex nullsrc=size=720x720[background]; [0:v]scale=720:720, setsar=1[scl1]; [1:v]scale=720:720, setsar=1[scl2]; [scl1]zoompan=z=if(lte(zoom,1.0),1.5,max(1.001,zoom - 0.0025)):fps=45:s=720x720:d=360[v0]; [scl2]zoompan=z=if(lte(zoom,1.0),1.5,max(1.001,zoom - 0.0025)):fps=45:s=720x720:d=360[v1] [v0]drawtext=fontfile=Lato-Bold.ttf: text='Example 1' :x=10:y=h-220:fontsize=80:fontcolor=white[text1]; [v1]drawtext=fontfile=Lato-Bold.ttf: text='Example 2' :x=10:y=h-220:fontsize=80:fontcolor=white[text2]; [background][text1]overlay=x=min(-w+(t*w/0.5),0):shortest=1[ovr1]; [ovr1][text2]overlay=x=min(-w+(t*w/0.5),0):shortest=1[ovr2]; [ovr1][ovr2]concat=n=2:v=1:a=0 format=yuv420p[video] -map [video] outputvideo.mp4
I got an error saying that my label was invalid:
[png_pipe @ 0xf3fc2000] Invalid stream specifier: ovr1. Last message repeated 1 times
Stream specifier 'ovr1' in filtergraph description
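A filtergraph label can be consumed only once, and [ovr1] above feeds both the second overlay and the concat, which is what the invalid-label complaint is about. A stripped-down sketch of the pad-reuse pattern using split (it drops the nullsrc/zoompan/drawtext parts and just slides image2 over image1, so it only illustrates the structure, not the full pipeline):
ffmpeg -loop 1 -t 4 -i image1.png -loop 1 -t 4 -i image2.png -filter_complex "[0:v]scale=720:720,setsar=1[s0];[1:v]scale=720:720,setsar=1[s1];[s0]split[s0a][s0b];[s0b][s1]overlay=x='min(-w+(t*w/0.5),0)'[seg2];[s0a][seg2]concat=n=2:v=1:a=0,format=yuv420p[video]" -map [video] -r 25 outputvideo.mp4
The same idea applied to the original graph would be to split [ovr1] into two copies, feed one copy to the second overlay and the other to concat.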
Download TS files from video stream
Videos on most sites make use of progressive downloading, which means that the video is downloaded to my computer and is easy to trace. There are lots of extensions out there to do this, and even in the dev-tools this is easily done.
On certain websites videos are streamed, which means that we do not just download 1 file; we download lots of small packages. In the dev-tools these packages can be traced. The website I'm interested in is: http://www.rtlxl.nl/#!/goede-tijden-slechte-tijden-10821/c8e2bff7-5a5c-45cb-be2b-4b3b3e866ffb.
-The packages have a .TS extension.
-Packages can be saved by copying the url of the request
-I can not play these files.
I must have done something wrong, or I'm missing something. I want to know what I am doing wrong. I want to create a Chrome extension for personal use which captures the URLs of all the packages. When I have all the URLs I want to pass them on to a PHP script which downloads them and uses ffmpeg to paste them into an mp4 file.
Please help me get the packages.
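Once the segment URLs (or the downloaded .ts files) are in hand, ffmpeg can join them either through its concat protocol or, when the site exposes an HLS playlist (.m3u8), by reading the playlist directly. A sketch (the file and playlist names are placeholders):
ffmpeg -i "concat:seg1.ts|seg2.ts|seg3.ts" -c copy output.mp4
ffmpeg -i "https://example.com/path/playlist.m3u8" -c copy output.mp4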
Gaps when recording using MediaRecorder API (audio/webm opus)
----- UPDATE HAS BEEN ADDED BELOW -----
I have an issue with MediaRecorder API (https://www.w3.org/TR/mediastream-recording/#mediarecorder-api).
I'm using it to record the speech from the web page(Chrome was used in this case) and save it as chunks. I need to be able to play it while and after it is recorded, so it's important to keep those chunks.
Here is the code which is recording data:
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(function(stream) {
  recorder = new MediaRecorder(stream, { mimeType: 'audio/webm; codecs="opus"' })
  recorder.ondataavailable = function(e) {
    // Read blob from `e.data`, decode64 and send to server;
  }
  recorder.start(1000)
})
The issue is that the WebM file which I get when I concatenate all the parts is (rarely) corrupted! I can play it as WebM, but when I try to convert it (with ffmpeg) to something else, it gives me a file with shifted timings.
For example, I'm trying to convert a file which has a duration of 00:36:27.78 to wav, but I get a file with a duration of 00:36:26.04, which is 1.74s less.
At the beginning of the file the audio is the same, but after about 10 minutes the WebM file plays with a small delay.
After some research, I found out that it also does not play correctly with the browser's MediaSource API, which I use for playing the chunks. I tried 2 ways of playing those chunks:
In the case when I just merge all the parts into a single blob, it works fine.
In the case when I add them via the sourceBuffer object, it has some gaps (I can see them by inspecting the buffered property):
697.196 - 697.528 (~330ms)
996.198 - 996.754 (~550ms)
1597.16 - 1597.531 (~370ms)
1896.893 - 1897.183 (~290ms)
Those gaps are 1.55s in total, and they are exactly in the places where the desync between the wav and webm files starts. Unfortunately, the file where this is reproducible cannot be shared because it is a customer's private data, and I have not been able to reproduce the issue on other media yet.
What can be the cause for such an issue?
----- UPDATE -----
I was able to reproduce the issue on https://jsfiddle.net/96uj34nf/4/
In order to see the problem, click on the "Print buffer zones" button and it will display the buffered time ranges: 0 - 136.349, 141.388 - 195.439, 197.57 - 198.589. You can see that there are two gaps:
- 136.349 - 141.388
- 195.439 - 197.57
So, as you can see, there are 5-second and 2-second gaps. I would be happy if someone could shed some light on why this is happening or how to avoid this issue.
Thank you
ffmpeg crop videos and combine them
I need to combine 2 videos vertically or horizontally, but before this I need to crop one or both of the videos.
Both video sizes need to be 720x640. I need to combine 2 videos that have 720x1280 resolution. I first crop them to 720x640 (crop 320px from the top and 320px from the bottom), then combine them vertically.
I can combine same-size videos with this command:
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]scale=520:-1[v0];[1:v]scale=520:-1[v1];[v0][v1]vstack" -c:v libx264 -crf 23 -preset veryfast output.mp4
This command works, but I need a crop operation added to it.
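Adding a crop stage before the vstack should do it; something like this (untested sketch, the crop offsets assume the 720x1280 portrait inputs described above):
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]crop=720:640:0:320[v0];[1:v]crop=720:640:0:320[v1];[v0][v1]vstack=inputs=2" -c:v libx264 -crf 23 -preset veryfast output.mp4
crop takes width:height:x:y, so 720:640:0:320 keeps the middle 640 rows and discards 320 pixels from the top and 320 from the bottom.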
Any idea?
Thanks
Segmentation fault while decoding the audio(mp3) file using C API
I am running the decode_audio.c example. It compiles successfully, but I get a segmentation fault when I execute it. I included the avformat.h header file and changed the codec logic for the mp3 format. I am using the following commands to compile and execute:
mycode$ gcc -o decode_audio decode_audio.c -lavutil -lavformat -lavcodec -lswresample -lz -lm
mycode$ ./decode_audio audio.mp3 raw.bin
What is the reason for this segmentation fault in my program?
I am using Ubuntu 16.04 LTS and ffmpeg 3.4.4 versions. Please help me.
Thanks in advance.
How to install FFMPEG for your discord bot?
I want to make my discord bot play music, but I keep getting an "FFMPEG not found" error.
My bot is mostly made out of pings, so I won't upload that part. The music code should be this one:
const Discord = require('discord.js');
const bot = new Discord.Client();
bot.on('message', (message) => {
  var bm = message.content.toLowerCase();
  if (bm == "pray") {
    var VC = message.member.voiceChannel;
    if (!VC) return message.reply("You are not in the church my son.");
    VC.join()
      .then(connection => {
        const dispatcher = connection.playFile('d:/mp3.MP3');
        dispatcher.on("end", end => { VC.leave(); });
      })
      .catch(console.error);
  }
});
P.S.: I know that I should import FFMPEG somehow because I have it downloaded already, but I don't know how.
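The error only means that the process running the bot cannot find an ffmpeg binary on its PATH. Two common fixes (a sketch; whether your discord.js version automatically picks up the ffmpeg-static package is an assumption you should verify):
ffmpeg -version      # run this in the same shell that starts the bot; if it fails, ffmpeg is not on PATH
npm install ffmpeg-static    # ships an ffmpeg binary inside node_modules
Alternatively, install ffmpeg system-wide (for example via your package manager or the official Windows builds) and make sure its bin folder is on PATH before restarting the bot.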
Vertically or horizontally stack several videos using ffmpeg?
I have two videos of the same exact length, and I would like to use ffmpeg to stack them into one video file.
How can I do this?
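For inputs of the same size the vstack and hstack filters do this directly (a minimal sketch; add encoder options such as -c:v libx264 -crf 23 as needed):
ffmpeg -i top.mp4 -i bottom.mp4 -filter_complex vstack=inputs=2 stacked_vertically.mp4
ffmpeg -i left.mp4 -i right.mp4 -filter_complex hstack=inputs=2 stacked_horizontally.mp4
vstack needs equal widths and hstack equal heights; scale or pad the inputs first if they differ.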