I'm using ffmpeg.exe in C#. I want to add a frame or an image after a specific number of frames in a video. For example, I want to add an image after every 100 frames without losing the video's audio. Kindly tell me the command to do this. Thank you.
How to add an image after specific frames in ffmpeg
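If the goal is to flash the image over every 100th frame (rather than to insert extra frames, which would change the video's timing), a minimal sketch is an overlay with a frame-number enable expression; input.mp4 and overlay.png are placeholder names, and the audio stream is copied untouched:

ffmpeg -i input.mp4 -i overlay.png -filter_complex "[0:v][1:v]overlay=enable='eq(mod(n,100),0)'[v]" -map "[v]" -map 0:a -c:a copy output.mp4

Actually inserting additional frames every 100 frames would require re-timing the video (for example, cutting it into segments and concatenating them), which is considerably more involved than a single overlay.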
lavfi/selectivecolor: fix neutral color filtering
Neutrals are supposed to be anything that is not black (0,0,0) and not white (N,N,N). The previous neutral filtering code was too strict: it excluded colors with any of their RGB components maxed out, instead of excluding only the white color.
Reported-by: Royi Avital
Using Hazel to execute ffmpeg (installed via Homebrew) script to convert video to .gif
What I want to do is set Hazel to watch a folder for a new video that I create and then when matched, an embedded FFMPEG script converts the video into a gif.
I have the matching criteria done (Hazel matching rules).
I have the ffmpeg recipe done,
ffmpeg -ss 5.0 -t 2.5 -i $1 -r 15 -filter_complex "[0:v] fps=15, scale=500:-1, split [a][b];[a] palettegen [p]; [b][p] paletteuse" $1.gif
But when I put the ffmpeg recipe in the "Embedded Script" dialogue box, I get an error when the match runs.
2018-08-09 18:43:15.818 hazelworker[68549] [Error] Shell script failed: Error processing shell script on file /Users/bengregory/Scripts/khgfygfjhbvmnb.mp4.
2018-08-09 18:43:15.818 hazelworker[68549] Shellscript exited with non-successful status code: -900
I'm not sure if it's relevant to mention that I've installed ffmpeg via Homebrew.
This is what the embedded shell script looks like (ffmpeg embedded script).
I've been trying to get this to work for weeks and so far haven't found anything that helps. I read through this article on how to use HandBrakeCLI (Hazel and HandbrakeCLI tutorial), but no luck.
Any help would be greatly received! Cheers
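One thing worth checking (a guess, since Hazel's shell does not necessarily inherit Homebrew's PATH): call ffmpeg by its full path and quote the $1 argument that Hazel passes in. A sketch of the embedded script, assuming the Homebrew binary lives at /usr/local/bin/ffmpeg (verify with `which ffmpeg`):

#!/bin/bash
# $1 is the matched file that Hazel hands to the embedded script
/usr/local/bin/ffmpeg -ss 5.0 -t 2.5 -i "$1" -r 15 -filter_complex \
  "[0:v] fps=15, scale=500:-1, split [a][b];[a] palettegen [p]; [b][p] paletteuse" "$1.gif"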
FFMPEG: Multiple cuts/splices in the same video?
Sorry in advance if this was a duplicate. I couldn't find the solution on Google, since the question is weird to word.
Anyway, can you use ffmpeg commands to splice videos?
For example...
ffmpeg -i MOVIE.mp4 -ss 00:00:00.000 -to 00:06:14.000 -ss 00:07:00.000 -to 00:07:15.000
You could have multiple -ss and -to commands to basically designate multiple cuts in the video, so that the final result would be from 0:0 to 6:14, and then after that, directly skip to 7:00 and end finally at 7:15. Does that make sense?
I know you can use real editors for this, but that's a bit more time consuming than to just simply do it here with a command. However, if it doesn't have this feature, it's not a big deal, I was just wondering.
Thanks!
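A single input cannot take multiple -ss/-to pairs like that, but one common approach (a sketch with the example timestamps converted to seconds; certainly not the only way) is to cut each range with trim/atrim and join the pieces with the concat filter:

ffmpeg -i MOVIE.mp4 -filter_complex \
  "[0:v]trim=0:374,setpts=PTS-STARTPTS[v0]; \
   [0:a]atrim=0:374,asetpts=PTS-STARTPTS[a0]; \
   [0:v]trim=420:435,setpts=PTS-STARTPTS[v1]; \
   [0:a]atrim=420:435,asetpts=PTS-STARTPTS[a1]; \
   [v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" SPLICED.mp4

The setpts/asetpts steps reset the timestamps of each segment so the concatenated output starts at zero and stays in sync.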
FFmpeg: Repeat the audio loop until the image slideshow finishes
I am working on an image slideshow with audio in the background. It is working fine, but I want the audio to start again (loop) until the slideshow finishes.
This is the command that I am using to create the slideshow.
{"-y", "-r", "1/" + duration, "-i", imgPath + "/frame_%5d.jpg", "-ss", "0", "-i", audioPath, "-map", "0:0", "-map", "1:0", "-vcodec", "libx264", "-r", "2", "-pix_fmt", "yuv420p", "-shortest", "-preset", "ultrafast", outputPath}
Compressed mp4 video is taking too long time to play (exoplayer)
A video (mp4) is recorded from the Android camera and sent to the backend, where I am using an ffmpeg wrapper to compress it (from a 44 MB video down to 5.76 MB). The compression works well, but when I play the video in Android (ExoPlayer), it takes a very long time to start.
Below is my code to compress:
FFmpegBuilder builder = new FFmpegBuilder()
    .setInput("D:/dummyVideos/myvideo.mp4")           // filename, or an FFmpegProbeResult
    .overrideOutputFiles(true)                        // override the output if it exists
    .addOutput("D:/dummyVideos/myvideo_ffmpeg.mp4")   // filename for the destination
    .setFormat("mp4")                                 // format is inferred from the filename, or can be set
    .disableSubtitle()                                // no subtitles
    .setAudioChannels(1)                              // mono audio
    .setAudioCodec("aac")                             // using the aac codec
    .setAudioSampleRate(48_000)                       // at 48 kHz
    .setAudioBitRate(32768)                           // at 32 kbit/s
    .setVideoCodec("libx264")                         // video using x264
    .setVideoFrameRate(24, 1)                         // at 24 frames per second
    .setVideoResolution(1280, 720)                    // at 1280x720 resolution
    .setVideoBitRate(762800)
    .setStrict(FFmpegBuilder.Strict.EXPERIMENTAL)     // allow FFmpeg to use experimental specs
    .done();
Can anyone tell me why video is taking too long time to play in exo player? Is anything wrong in the compression?
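Nothing in the compression settings above looks unusual; one common cause of a long start-up delay with progressive playback is the mp4 moov atom sitting at the end of the file. A hedged sketch of the equivalent plain ffmpeg command with the moov atom moved to the front (whether the Java wrapper exposes a way to pass extra arguments is something to check in its documentation):

ffmpeg -i D:/dummyVideos/myvideo.mp4 -c:v libx264 -b:v 762800 -r 24 -s 1280x720 \
  -c:a aac -ac 1 -ar 48000 -b:a 32768 -movflags +faststart \
  -y D:/dummyVideos/myvideo_ffmpeg.mp4

With +faststart the player can begin playback before the whole file has been downloaded, which is usually what ExoPlayer needs when streaming over HTTP.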
lavc/hevcdec: add ONLY_IF_THREADS_ENABLED where it is missing.
Extract images from mkv video with ffmpeg at every 20 minutes
I used the following command
ffmpeg -i "d:\pathto.mkv" -vf fps=1/1200 "D:\path\thumb%04d.png"
but it is not working: it takes too long and no image is generated.
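The fps=1/1200 approach decodes the entire file before the first image appears, which can look like a hang on a long mkv. A sketch of a faster alternative, seeking to each 20-minute mark and grabbing a single frame (the six iterations assume roughly a two-hour file, and a POSIX shell such as Git Bash is assumed to be available on Windows):

for i in 0 1 2 3 4 5; do
  ffmpeg -ss $((i * 1200)) -i "d:\pathto.mkv" -frames:v 1 "D:\path\thumb$(printf '%04d' "$i").png"
done

Putting -ss before -i makes ffmpeg seek in the input instead of decoding everything up to that point.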
ffmpeg gif to mp4 in javascript
How do I convert a gif to mp4 in JavaScript with ffmpeg? Thank you very much.
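Whichever JavaScript wrapper ends up driving it (fluent-ffmpeg, ffmpeg.wasm, a plain child_process call; the choice is left open here), the underlying conversion is a single command along these lines. The scale filter rounds both dimensions down to even values because yuv420p requires them:

ffmpeg -i input.gif -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -movflags +faststart -pix_fmt yuv420p output.mp4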
Bug #4165 (New): Imagick and shared hosting servers
Problem encountered with version 3.1 on a shared-hosting SPIP installation.
The thumbnails are not generated by Imagick because the source and target documents are not found by Imagick.
Solution: use absolute paths to locate the files.
To do this, I prepend getcwd().'/' to the directory of a file to be processed wherever necessary.
For example, in ecrire/inc/filtres_images_lib_mini.jpg at line 1002:
$imagick->readImage(getcwd().'/'.$image);
$imagick->resizeImage($destWidth, $destHeight, Imagick::FILTER_LANCZOS,
1);//, IMAGICK_FILTER_LANCZOS, _IMG_IMAGICK_QUALITE / 100);
$imagick->writeImage(getcwd().'/'.$vignette);
Tested on a shared-hosting SPIP and on a normal SPIP; both work.
ffmpeg install within existing Node.js docker image
I need to use ffmpeg in a Node.js application that runs in a Docker container (created using docker-compose). I'm very new to Docker and would like to know how to tell Docker to install ffmpeg when building the image.
Dockerfile:
FROM node:carbon
WORKDIR /usr/src/app
# where available (npm@5+)
COPY package*.json ./
RUN npm install -g nodemon
RUN npm install --only=production
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
package.json:
{ "name": "radcast-apis", "version": "0.0.1", "private": true, "scripts": { "start": "node ./bin/www", "dev": "nodemon --inspect-brk=0.0.0.0:5858 ./bin/www" }, "dependencies": { "audioconcat": "^0.1.3", "cookie-parser": "~1.4.3", "debug": "~2.6.9", "express": "~4.16.0", "firebase-admin": "^5.12.1", "http-errors": "~1.6.2", "jade": "~1.11.0", "morgan": "~1.9.0" }, "devDependencies": { "nodemon": "^1.11.0" }
}
docker-compose.yml:
version: "2"
services:
  web:
    volumes:
      - "./app:/src/app"
    build: .
    command: npm run dev
    ports:
      - "3000:3000"
      - "5858:5858"
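Since node:carbon is a Debian-based image, one minimal sketch is to install ffmpeg from apt in an extra RUN step before the npm installs; the command below is what that RUN instruction would execute (that the Debian release behind node:carbon ships an ffmpeg package is an assumption worth verifying with a test build):

apt-get update && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*

After rebuilding the image, `docker-compose run web ffmpeg -version` should confirm that the binary is on the PATH inside the container.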
FFmpeg command works locally but not on Azure Batch Service
I have a command that generates a video with background and text on it with FFmpeg and would like to render it using Azure Batch Service. Locally my command works:
./ffmpeg -f lavfi -i color=c=green:s=854x480:d=7 -vf "[in]drawtext=fontsize=46:fontcolor=White:text=dfdhjf dhjf dhjfh djfh djfh:x=(w-text_w)/2:y=((h-text_h)/2)-48,drawtext=fontsize=46:fontcolor=White:text= djfh djfh djfh djfh djf jdhfdjf hjdfh djfh jd fhdj:x=(w-text_w)/2:y=(h-text_h)/2,drawtext=fontsize=46:fontcolor=White:text=fh:x=(w-text_w)/2:y=((h-text_h)/2)+48[out]" -y StoryA.mp4
while the one generated programmatically with C# and added as a task in Batch Service returns a failure:
cmd /c %AZ_BATCH_APP_PACKAGE_ffmpeg#3.4%\ffmpeg-3.4-win64-static\bin\ffmpeg -f lavfi -i color=c=green:s=854x480:d=7 -vf "[in]drawtext=fontsize=46:fontcolor=White:text=dfdhjf dhjf dhjfh djfh djfh:x=(w-text_w)/2:y=((h-text_h)/2)-48,drawtext=fontsize=46:fontcolor=White:text= djfh djfh djfh djfh djf jdhfdjf hjdfh djfh jd fhdj:x=(w-text_w)/2:y=(h-text_h)/2,drawtext=fontsize=46:fontcolor=White:text=fh:x=(w-text_w)/2:y=((h-text_h)/2)+48[out]" -y StoryA.mp4
The ffmpeg configuration works, and so does the pool, as I've already tested it with simpler ffmpeg commands that had input and output files. This command doesn't have an input file; maybe that is part of the problem?
Thank you
FFMPEG error when making animations with certain frame dimensions
I have been using ffmpeg to successfully generate animations of png images with a size of 7205x4308 with the following command:
-framerate 25 -f image2 -start_number 1 -i fig%4d.png -f mp4 -vf scale=-2:ih -vcodec libx264 -pix_fmt yuv420p 2015-2018.mp4
When I try to run the same command for a group of images with a different size, e.g., 6404x5575, I get the following error:
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!
I have concluded that the reason it is failing has something to do with the frame size because that is the only thing that is different between the first successful animation and the one that is failing. But, my intuition could be wrong(?). I have tried to remove the scaling parameter in the command but I get the same error.
I am using ffmpeg version 3.4.2 on Mac OSX 10.13 via python.
Any help would be much appreciated. Thanks!
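One likely culprit is the odd height: libx264 with yuv420p needs both dimensions to be even, and scale=-2:ih only fixes the width, so the height 5575 stays odd (4308 was already even, which is why the first set of images worked). A sketch that rounds both dimensions down to even values, keeping the rest of the original arguments unchanged:

-framerate 25 -f image2 -start_number 1 -i fig%4d.png -f mp4 -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -vcodec libx264 -pix_fmt yuv420p 2015-2018.mp4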
JavaFx MediaPlayer can't play my mp3 and m4a files converted by ffmpeg
I record some .wav files from the microphone and convert them to mp3 and m4a files. These files play correctly in my desktop player.
Then in my JavaFX program, I play them as:
String fileUri = file.toURI().toString();
Media media = new Media(fileUri);
MediaPlayer mediaPlayer = new MediaPlayer(media);
mediaPlayer.play();
But there is no sound, and no errors.
I use ffmpeg to inspect them:
ffmpeg -i demo.m4a
Input #0, aac, from 'demo.m4a':
  Duration: 00:00:54.00, bitrate: 132 kb/s
  Stream #0:0: Audio: aac (LC), 44100 Hz, stereo, fltp, 132 kb/s
ffmpeg -i hello.mp3
Input #0, mp3, from 'hello.mp3':
  Metadata:
    encoder         : Lavf57.83.100
  Duration: 00:00:01.12, start: 0.069063, bitrate: 49 kb/s
  Stream #0:0: Audio: mp3, 16000 Hz, stereo, s16p, 48 kb/s
And I use this command to convert with ffmpeg:
ffmpeg -i hello.wav hello.mp3
Not sure what is wrong.
Update: finally I used this command to generate an mp3 that JavaFX can play:
ffmpeg -i hello.wav -f mp2 hello.mp3
(you can also add -c:a libmp3lame to generate a smaller mp3)
It seems JavaFX only supports the mp2 format of mp3 files.
How can I quantitatively measure gstreamer H264 latency between source and display?
I have a project where we are using gstreamer, x264, etc., to multicast a video stream over a local network to multiple receivers (dedicated computers attached to monitors). We're using gstreamer on both the video source (camera) systems and the display monitors.
We're using RTP, payload 96, and libx264 to encode the video stream (no audio).
But now I need to quantify the latency between (as close as possible to) frame acquisition and display.
Does anyone have suggestions that use the existing software?
Ideally I'd like to be able to run the testing software for a few hours to generate enough statistics to quantify the system. That means I can't do one-off tests like pointing the source camera at the receiving display monitor showing a high-resolution timestamp and manually calculating the difference...
I do realise that using a pure software-only solution, I will not be able to quantify the video acquisition delay (i.e. CCD to framebuffer).
I can arrange that the system clocks on the source and display systems are synchronised to a high accuracy (using PTP), so I will be able to trust the system clocks (else I will use some software to track the difference between the system clocks and remove this from the test results).
In case it helps, the project applications are written in C++, so I can use C event callbacks, if they're available, to consider embedding system time in a custom header (e.g. frame xyz, encoded at time TTT - and use the same information on the receiver to calculate a difference).
FFMPEG: Fill/change (part of) the audio waveform color according to actual progress as time elapses
I am trying to build a command that generates a waveform from an mp3 file, shows it over a background image, and plays the audio. Together with this, I want to change the waveform color from left to right (something like a progress bar) as the overall video time elapses.
I have created the following command, which shows a progress bar by using drawbox to fill a box color according to the current time position.
ffmpeg -y -loop 1 -threads 0 -i sample_background.png -i input.mp3 -filter_complex "color=red@0.5:s=1280x100[Color];[0:v]drawbox=0:155:1280:100:gray@1:t=fill[baserect];[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xffffff[waveform]; [baserect][waveform] overlay=0:155 [v1];[v1][Color] overlay=x='if(gte(t,0), -W+(t)*64, NAN)':y=155:format=yuv444[v2]" -map "[v2]" -map 1:a -c:v libx264 -crf 35 -ss 0 -t 20 -c:a copy -shortest -pix_fmt yuv420p -threads 0 output_withwave_and_progresbar.mp4
But I want to show the progress inside the generated audio waveform instead of drawing and filling a rectangle with drawbox.
So I have tried to make two waveforms in two different colors and overlay them on each other, in such a way that the top waveform is only displayed up to the x position (from the left) corresponding to the current time.
ffmpeg -y -loop 1 -threads 0 -i sample_background.png -i input.mp3 -filter_complex "[0:v]drawbox=0:155:1280:100:gray@1:t=fill[baserect];[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xff0000[waveform];[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xffffff[waveform2]; [baserect][waveform] overlay=0:155 [v1];[v1][waveform2] overlay=x='if(gte(t,0), -W+(t)*64, NAN)':y=155:format=yuv444[v2]" -map "[v2]" -map 1:a -c:v libx264 -crf 35 -ss 0 -t 20 -c:a copy -shortest -pix_fmt yuv420p -threads 0 test.mp4
But I am not able to find a way to do a wipe effect from left to right; currently it is sliding (as I am changing the x of the overlay). It might be possible using alphamerge, setting all other pixels to transparent and only showing pixels whose x position is less than the current position, but I am not able to find out how to do this.
We can use any mp3 file; currently I have set a 20-second duration.
Can someone please guide how we can do this?
Thanks.
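A sketch of one way to get a wipe instead of a slide, keeping the 20-second duration hardcoded as in the commands above: leave both waveforms at x=0 and animate the alpha channel of the colored one with geq, so only pixels to the left of the moving edge stay opaque (W*T/20 is the edge position; replace 20 with the real duration):

ffmpeg -y -loop 1 -i sample_background.png -i input.mp3 -filter_complex "[0:v]drawbox=0:155:1280:100:gray@1:t=fill[baserect];[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xffffff[wave_all];[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xff0000,format=rgba,geq=r='r(X,Y)':g='g(X,Y)':b='b(X,Y)':a='if(lt(X,W*T/20),alpha(X,Y),0)'[wave_done];[baserect][wave_all]overlay=0:155[v1];[v1][wave_done]overlay=0:155[v2]" -map "[v2]" -map 1:a -c:v libx264 -crf 35 -t 20 -c:a copy -shortest -pix_fmt yuv420p output.mp4

The white waveform is drawn in full as the background, and the red one is masked per frame so it is revealed left to right as playback progresses.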
How to convert all new mp4 files in a directory using ffmpeg? [on hold]
What bash script can I use to accomplish this task with ffmpeg?
I want my script to run as a loop and check if there are new mp4 videos in the directory.
Afterwards, move the new files to another folder, if the file doesn't exist there yet.
If possible, I want to add a watermark to the video.
I have the code to add more files. I want to include it next to the script that does the conversion of the new files.
I created a bash file with the following data:
#! /bin/bash
srcExt=$1
destExt=$2
srcDir=$3
destDir=$4
opts=$5
for filename in "$srcDir"/*.$srcExt; do
  basePath=${filename%.*}
  baseName=${basePath##*/}
  ffmpeg -i "$filename" $opts "$destDir"/"$baseName"."$destExt" -i logo.png -filter_complex overlay
done
echo "Conversion from ${srcExt} to ${destExt} complete!"
Later I then run the file:
./ffmpeg-batch.sh mp4 mp4 /folder-source-file/ /folder-output-files/ '-ab 128k'
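For the watch-and-convert part specifically, here is a minimal sketch (the directory names, the 60-second polling interval and the overlay position are all placeholders, not values taken from the question) that re-encodes any mp4 not yet present in the destination folder and stamps logo.png onto it:

#!/bin/bash
srcDir=/folder-source-file
destDir=/folder-output-files
while true; do
  for f in "$srcDir"/*.mp4; do
    [ -e "$f" ] || continue                 # skip if the glob matched nothing
    out="$destDir/$(basename "$f")"
    if [ ! -e "$out" ]; then                # only convert files not processed yet
      ffmpeg -i "$f" -i logo.png -filter_complex "overlay=10:10" -c:a copy "$out"
    fi
  done
  sleep 60
done

Running this under cron or a systemd service, instead of the endless while loop, is the usual production variant of the same idea.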
Can ffmpeg show a progress bar?
I am converting a .avi file to a .flv file using ffmpeg. As it takes a long time to convert a file, I would like to display a progress bar. Can someone please guide me on how to go about this?
I know that ffmpeg somehow has to output the progress in a text file and I have to read it using ajax calls. But how do I get ffmpeg to output the progress to the text file?
Thank you very much.
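ffmpeg itself has a -progress option that periodically writes machine-readable key=value pairs (frame, out_time, speed, and finally progress=end) to a file or URL, which the web page can then poll via ajax; a minimal sketch with placeholder filenames:

ffmpeg -progress /tmp/ffmpeg_progress.txt -i input.avi -y output.flv 2>/dev/null

Comparing out_time against the input duration (which ffprobe can report) gives the percentage to show in the progress bar.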
Converting mp3 and jpg to mp4 with h.264 and aac.
I have over 500 sermons in audio format (.mp3). I want to use FFMPEG because it's free and I like the software.
I am trying to combine the .mp3s with a jpg (picture of my church) to create .mp4 for Youtube.
I know that YouTube usually does well with Audio format:aac or mp3 and Video format:AVC with h.264.
here is the code:
FOR %%a IN (audio/*.mp3) DO (
  echo Converting: %%a
  ffmpeg -i image/img.jpg -i "%%a" -c:v copy -c:a -vcodec libx264 -acodec aac -strict -2 videos/%%~na.mp4
)
echo Finished
I have the ffmpeg call in a DO loop, but I am getting this error:
[NULL @ 000002574bea6b40] Unable to find a suitable output format for 'libx264'
libx264: Invalid argument
I have the latest version. Any takers?
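The error appears to happen because -c:a swallows -vcodec as its value, leaving libx264 dangling so ffmpeg treats it as a second output filename; -c:v copy also contradicts encoding with libx264. A single-file sketch of the kind of command that typically works for a still image plus an mp3 (the filenames are placeholders):

ffmpeg -loop 1 -i image/img.jpg -i audio/sermon.mp3 -c:v libx264 -tune stillimage -c:a aac -b:a 192k -pix_fmt yuv420p -shortest videos/sermon.mp4

Inside the FOR loop, "%%a" and videos/%%~na.mp4 substitute for the placeholder names; -loop 1 together with -shortest keeps the picture on screen for the full length of the audio.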