Does anyone have any experience with this type of thing? I'm currently using a Ryzen 7 8C/16T CPU and a program called BES. Capping ffmpeg at 10% CPU is almost no slower at transcoding from mkv to mp4 (both x264; I know I can just copy the streams) than using 100% CPU on all 16 threads. Since this is the case, what is the ideal core count for libx264? Does the same type of thing happen with 32 threads, or 8 threads, etc.?
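For reference, a rough benchmarking sketch (input filename assumed): libx264's thread count can be pinned directly, so the scaling can be timed without an external limiter like BES:

ffmpeg -i input.mkv -c:v libx264 -threads 4 -f null -

Running this with -threads 1, 4, 8 and 16 and comparing the reported encoding speed shows where the curve flattens; as I understand it, x264 defaults to roughly 1.5x the logical core count, and gains past a handful of threads are usually modest at typical resolutions.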
FFMPEG: How to encode for seekable video at high key frame interval
I'm looking for an ffmpeg command that's best suited for driving a video with mouse control on "requestAnimationFrame". Basically, it needs to be fast-seeking and encoded with keyframes at short, regular intervals. I can't seem to nail down which parameters aid fast seeking and keyframe placement.
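Something along these lines (a sketch with assumed filenames and numbers) targets both goals:

ffmpeg -i input.mp4 -c:v libx264 -g 12 -keyint_min 12 -sc_threshold 0 -movflags +faststart output.mp4

-g and -keyint_min pin the keyframe interval to every 12 frames, -sc_threshold 0 stops scene-cut detection from inserting irregular extra keyframes, and -movflags +faststart moves the index to the front of the file so seeking can begin before the whole file is loaded.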
thanks! Johnny
FFmpegMediaMetadataRetriever not getting all frames from video
I am developing an application that takes a local video file, extracts its frames, and writes them to disk. I am using FFmpegMediaMetadataRetriever to extract frames from videos. Here is my code:
retriever.setDataSource(activity, uri);
Log.e("duration ->", retriever.extractMetadata(FFmpegMediaMetadataRetriever.METADATA_KEY_DURATION));
long duration = Long.parseLong(retriever.extractMetadata(FFmpegMediaMetadataRetriever.METADATA_KEY_DURATION));
int everyNFrame = 1;
double frameRate = Double.parseDouble(retriever.extractMetadata(FFmpegMediaMetadataRetriever.METADATA_KEY_FRAMERATE));
Log.e("all metadata", retriever.getMetadata().getAll().toString());
long sec = Math.round(1000 * 1000 / frameRate); // microseconds between consecutive frames
Bitmap bitmap;
// Bitmap bitmap2;
// Log.e(" timeskip ", sec + " ----------- " + (frameRate * 1000));
for (long i = 1000; i < duration * 1000; i += sec) {
// for (long i = sec; i < duration * 1000 && !stopWorking; i += sec)//30*sec)
    try {
        // assumed from context: the original (garbled) line fetched the frame at time i
        bitmap = retriever.getFrameAtTime(i, FFmpegMediaMetadataRetriever.OPTION_CLOSEST);
        FileOutputStream out = new FileOutputStream(path + "/img_" + i + ".jpg");
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
        bitmap.recycle();
        Thread.sleep(75);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
It is not extracting all the frames from the video, and for some videos the extracted frames are repeated. I have gone through all the answers I could find on Stack Overflow and other websites, but the issue is still there.
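As a cross-check (filenames assumed), plain ffmpeg can dump every decoded frame, which at least isolates whether the skipped/repeated frames come from the retriever or from the file itself:

ffmpeg -i video.mp4 -vsync 0 frames/img_%05d.jpg

-vsync 0 passes frames through without duplicating or dropping any to match a target rate.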
ffprobe output video: png
ffprobe is telling me that my video file is a png.
[png_pipe @ 0x7f9ece003c00] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #0, png_pipe, from '1.ts': Duration: N/A, bitrate: N/A Stream #0:0: Video: png, rgb24(pc), 1x1 [SAR 3779:3779 DAR 1:1], 25 tbr, 25 tbn, 25 tbc
I'm a little bit confused, as it plays fine as a ts or mpeg file. But when I run

ffmpeg -y -i in.ts -acodec copy -vcodec copy out.mp4

the command completes fine, but I end up with a file that can't be played. I get an alert that says "The operation could not be completed" from QuickTime, and I can't open it in Chrome or Firefox either, so I know it's not an issue specific to QuickTime.
So this probably has to do with the video being a png video. I always thought png was a format for still images only, but here I am. Can someone give me some info on this, and how I can convert it to an mp4?
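Since the log itself suggests raising probesize, one thing to try (a sketch, not verified against this file) is to give probing more data, or to force the MPEG-TS demuxer instead of letting ffmpeg guess:

ffmpeg -y -probesize 50M -analyzeduration 50M -i in.ts -c copy out.mp4
ffmpeg -y -f mpegts -i in.ts -c copy out.mp4

If ffprobe then reports a real video stream (H.264, MPEG-2, etc.) instead of png, the stream copy to mp4 should produce a playable file.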
FFmpeg to Shoutcast - Right URL syntax
I'm trying to stream mp3 audio to a SHOUTcast server. I'm able to do that with Icecast servers, but the same syntax apparently doesn't work for SHOUTcast ones. The main issue is: I have a hostname (IP), a port and a password. That's it. With Icecast I use icecast://username:pass@host:port/mount. But for SHOUTcast, where do I put the password in the stream URL? I tried a few things like http://pass@host:port and http://host:port/pass, but they did not work.
Is anybody able to do this? Thank you! :)
P.S. Here is the command I run to stream to Icecast:
ffmpeg -f alsa -i pulse -c:a libmp3lame -ar 44100 -ab 128k -content_type 'audio/mpeg' -f mp3 icecast://user:pass@host:port/mount
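One avenue worth trying (an assumption on my part, not verified against a live SHOUTcast server): ffmpeg's icecast protocol has a legacy_icecast option intended for SHOUTcast-compatible servers, and "source" is the conventional username when only a password is provisioned:

ffmpeg -f alsa -i pulse -c:a libmp3lame -ar 44100 -b:a 128k -content_type audio/mpeg -legacy_icecast 1 -f mp3 icecast://source:pass@host:port/stream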
stream mp4 file with ffmpeg from a specific time of video
I want to stream a video file (.mp4) from a different starting time. For example, I want to stream the file test.mkv from around 00:02:30, so that when I stream it to an rtmp server the video starts from 00:02:30 of the movie, not from the beginning.

Note: I don't want to wait that long; the stream should start from that moment right after I press enter on the ffmpeg command, so answers like using a cronjob are not useful.

Here is the ffmpeg command I'm using:
ffmpeg -i test.mkv -pix_fmt yuv420p -vsync 1 -threads 0 -vcodec libx264 -r 30 -g 60 -sc_threshold 0 -b:v 512k -bufsize 640k -maxrate 640k -preset veryfast -profile:v baseline -tune film -acodec aac -b:a 128k -ac 2 -ar 48000 -af "aresample=async=1:min_hard_comp=0.100000:first_pts=0" -bsf:v h264_mp4toannexb -f flv rtmp://test.server.com
Note: if you have any suggestions for improving the ffmpeg command, I'd appreciate those as well.
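For what it's worth, a sketch based on the command above: placing -ss before -i makes ffmpeg seek the input before it starts reading, so output begins at that timestamp immediately, and -re throttles reading to native speed, which live RTMP delivery generally needs:

ffmpeg -re -ss 00:02:30 -i test.mkv ... -f flv rtmp://test.server.com

(the ... stands for the unchanged encoding options from the command above).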
Python FFmpeg: Single static image coupled with audio outputs massive file size
I'm using this Python library to programmatically generate a video using a single static image (.PNG - 3.05 MB - 1920 x 1080) and an audio track (.WAV - pcm_s24le (24 bit) - 48000 Hz - 34.6 MB) as input.
I'm using this technique to speed up the video generation process.
However, the final file size of output_video_final is 2.33 GB. Considering my input file sizes (.PNG - 3.05 MB / .WAV - 34.6 MB), why is the final .MOV output so large?
Here's my code:
''' Generate .MOV using static image as input '''
image = ffmpeg.input(input_image, loop='1', t='00:00:1', framerate='24000/1001', probesize='42M')
output = ffmpeg.output(image, output_video, f='mov', vcodec='prores_ks', vprofile='3', pix_fmt='yuv422p10le', g='120', video_track_timescale='24000', movflags='use_metadata_tags', timecode='00:00:00:00', color_primaries='bt709', color_trc='bt709', colorspace='bt709', qcomp='1', preset='veryfast', bsf='prores_metadata=color_primaries=bt709:color_trc=bt709:colorspace=bt709', vf='scale=in_range=full:in_color_matrix=bt709:out_range=full:out_color_matrix=bt709')
output.run()

''' Generate .MOV using static image .MOV from previous output and combine with audio input '''
audio = ffmpeg.input(input_audio, filter_complex='channelsplit')
video = ffmpeg.input(output_video, t='00:02:06', stream_loop='126')
output = ffmpeg.output(video, audio, output_video_final, vcodec='copy', acodec='pcm_s24le', audio_bitrate=bitrate)
output.run()
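A likely explanation, offered as an assumption about this pipeline: prores_ks with profile 3 is ProRes 422 HQ, an intra-only mezzanine codec whose bitrate is roughly fixed per frame size, so about two minutes of 1080p lands in the gigabyte range no matter how static the picture is; the input file sizes never enter into it. If a small output matters more than a ProRes deliverable, an inter-frame codec collapses the repeated frames. A sketch (variable names reused from above, .mov kept so pcm_s24le still fits):

image = ffmpeg.input(input_image, loop='1', framerate='24000/1001')
audio = ffmpeg.input(input_audio)
output = ffmpeg.output(image, audio, 'output_small.mov', vcodec='libx264', tune='stillimage', pix_fmt='yuv420p', t='00:02:06', acodec='pcm_s24le')
output.run()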
Encode video frame by frame using FFmpeg
I'm trying to encode a WebRTC video. WebRTC provides this method:

@Override
public void onFrame(VideoFrame frame) {
    // FFmpeg command
}
Each VideoFrame is YUV420 and provides information about rotation (integer), timestamp (long), width and height (https://github.com/webrtc-uwp/webrtc/blob/master/sdk/android/api/org/webrtc/VideoFrame.java).
Is it possible to use the above event-method to encode the Video using FFmpeg?
Obviously, video frames would be provided dynamically; maybe that's an issue.
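In principle yes, by treating ffmpeg as a pipe sink; a sketch of the idea (the dimensions and rate here are assumptions and must match what the VideoFrame actually delivers):

ffmpeg -f rawvideo -pix_fmt yuv420p -s 640x480 -r 30 -i - -c:v libx264 out.mp4

Launch that as a subprocess and, inside onFrame, write each frame's Y, U and V planes to its stdin in order. Rotation isn't carried by raw YUV, so it would have to be applied separately (e.g. with a transpose filter).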
How to combine video and audio files in ffmpeg-python
I'm trying to combine a video (with no sound) and its separate audio file. I've tried plain ffmpeg:

ffmpeg -i video.mp4 -i audio.mp4 -c copy output.mp4

and it works fine. I'm trying to achieve the same output with ffmpeg-python, but with no luck. Any help on how to do this?
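A minimal sketch that should generate the equivalent command line (filenames matching the CLI example above):

import ffmpeg

video = ffmpeg.input('video.mp4')
audio = ffmpeg.input('audio.mp4')
ffmpeg.output(video, audio, 'output.mp4', c='copy').run()

Passing both input streams to ffmpeg.output() maps them into the one output file, and c='copy' becomes -c copy.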
Python FFmpeg: Unable to set audio output language
I'm using this Python library to programmatically generate a .MOV using a single .WAV (pcm_s24le - 24 bit - 48000 Hz) as input.
I've already asked a few questions relating to other aspects of my video pipeline, seen here and here.
All I'm trying to do is assign the eng language tag to a single audio stream in the .MOV output.
Here's my code:
audio = ffmpeg.input(input_audio)
output = ffmpeg.output(audio, output_audio, acodec='copy', audio_bitrate=bitrate, metadata='s:a:0 language=eng')
output.run()
The .MOV output this generates displays the following via FFprobe:
Stream #0:0: Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, stereo, s32 (24 bit), 2304 kb/s (default)
However, when I run the same input file using the same options/parameters via command line:
ffmpeg -y -i input_audio.wav -c:a copy -metadata:s:a:0 language=eng output_audio.mov
FFprobe states the stream language is eng:
Stream #0:0(eng): Audio: pcm_s24le (in24 / 0x34326E69), 48000 Hz, stereo, s32 (24 bit), 2304 kb/s (default)
Why does the command line approach output Stream #0:0(eng) but not the programmatic approach?
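One explanation worth checking: ffmpeg-python turns the kwarg metadata='s:a:0 language=eng' into a single argument after -metadata, i.e. -metadata "s:a:0 language=eng", so the :s:a:0 stream specifier never reaches ffmpeg as part of the option name. Option names that aren't valid Python identifiers can be passed through a dict instead; a sketch:

audio = ffmpeg.input(input_audio)
output = ffmpeg.output(audio, output_audio, acodec='copy', audio_bitrate=bitrate, **{'metadata:s:a:0': 'language=eng'})
output.run()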
google speech to text errors out (grpc invalid deadline NaN)
I have an ffmpeg script that cuts an audio file into a short 5-second clip; however, after I cut the file, calling the Google Speech recognize command errors out.
Creating a clip - full code link:
const uri = 'http://traffic.libsyn.com/joeroganexp/p1400.mp3?dest-id=19997';
const command = ffmpeg(got.stream(uri));
command.seek(0).duration(5).audioBitrate(128).format('mp3')
...
which works fine and creates ./clip2.mp3.
I then take that file and upload it to the Speech-to-Text API, and it times out (script here). When I pass the timeout and maxRetries arguments I can get the actual error:
Error: 2 UNKNOWN: Getting metadata from plugin failed with error: Deadline is too far in the future at Object.callErrorFromStatus (/Users/jamescharlesworth/Downloads/demo/node_modules/@grpc/grpc-js/build/src/call.js:30:26) at Http2CallStream. (/Users/jamescharlesworth/Downloads/demo/node_modules/@grpc/grpc-js/build/src/client.js:96:33) at Http2CallStream.emit (events.js:215:7) at /Users/jamescharlesworth/Downloads/demo/node_modules/@grpc/grpc-js/build/src/call-stream.js:98:22 at processTicksAndRejections (internal/process/task_queues.js:75:11) { code: 2, details: 'Getting metadata from plugin failed with error: Deadline is too far in the future', metadata: Metadata { internalRepr: Map {}, options: {} }, note: 'Exception occurred in retry method that was not classified as transient'
}
Stepping through the grpc code I see that the deadline is an invalid date. This seems to be causing the issue, but I assume it may be from incorrect params passed into the speech client.recognize() method.
A few other things to note:
- The script works for some audio files, not all
- I can upload the broken clip, clip2.mp3, to the demo app and it works fine.
- If I change the seek command of my ffmpeg script to start at 0.01, the speech recognize command will work (however, it breaks other audio clips, as it's not the correct starting point). I notice that when I do this, the PNG embedded in the mp3 gets stripped out and ./clip2.mp3 is a much smaller file.
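If the embedded artwork really is the trigger, it can be stripped without re-cutting the audio (a sketch, filenames assumed):

ffmpeg -i clip2.mp3 -map 0:a -c copy clip2-noart.mp3

-map 0:a keeps only the audio streams, dropping the attached PNG, and -c copy leaves the audio bytes untouched.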
Anomaly #4415 (New): Status drop-down list too rich for a restricted admin on a sect...

Hello,

I just noticed a problem today on https://contrib.spip.net/ecrire/?exec=article&id_article=5223

I am not a restricted admin of that section. And yet the status drop-down list shows me all 5 statuses, even though "Published online" should not be among them. And if I choose "Published online" and then click the change button, the status stays at "Proposed for publication".

Expected behaviour: having only the options intended for editors in sections where I am not a restricted admin.
Vcpkg building FFmpeg with libxml2
First of all, I'm a newbie in this area, so I'm sorry in advance if I'm asking something really stupid or nonsensical. Let's get right to the point: I'm trying to add features to ffmpeg, and as far as I know I have to modify the CONTROL file and portfile.cmake, but apparently that's not enough. I don't understand the errors, and I can't find more information about this on the internet, so any guidance on how to do it properly would already mean a lot to me. I'll leave the relevant portions of code and the error file below.
**portfile.cmake:**
if("libxml2" IN_LIST FEATURES) set(OPTIONS "${OPTIONS} --enable-libxml2")
endif() **CONTROL:**
Feature: libxml2
Description: Libxml2 is the XML C parser and toolkit developed for the Gnome project (but usable outside of the Gnome platform)
Build-Depends: zlib, libiconv, liblzma

**Command Prompt after (vcpkg install ffmpeg[libxml2]:x64-windows):**
CMake Error at scripts/cmake/vcpkg_execute_required_process.cmake:72 (message): Command failed: C:/vcpkg/downloads/tools/msys2/msys64/usr/bin/bash.exe --noprofile --norc C:/vcpkg/ports/ffmpeg\build.sh C:/vcpkg/buildtrees/ffmpeg/x64-windows-rel C:/vcpkg/buildtrees/ffmpeg/src/n4.2-02d8c63f80 C:/vcpkg/packages/ffmpeg_x64-windows "--enable-asm --enable-yasm --disable-doc --enable-debug --enable-runtime-cpudetect --enable-libxml2 --disable-openssl --disable-ffmpeg --disable-ffplay --disable-ffprobe --disable-libvpx --disable-libx264 --disable-opencl --disable-lzma --disable-bzlib --enable-avresample --disable-static --enable-shared --extra-cflags=-DHAVE_UNISTD_H=0 --extra-cflags=-MD --extra-cxxflags=-MD" Working Directory: C:/vcpkg/buildtrees/ffmpeg/x64-windows-rel Error code: 1 See logs for more information: C:\vcpkg\buildtrees\ffmpeg\build-x64-windows-rel-out.log Call Stack (most recent call first): ports/ffmpeg/portfile.cmake:197 (vcpkg_execute_required_process) scripts/ports.cmake:94 (include) Error: Building package ffmpeg:x64-windows failed with: BUILD_FAILED
Please ensure you're using the latest portfiles with `.\vcpkg update`, then
submit an issue at https://github.com/Microsoft/vcpkg/issues including:

Package: ffmpeg:x64-windows
Vcpkg version: 2019.09.12-nohash

**Log Error file:**
=== CONFIGURING === ERROR: libxml-2.0 not found using pkg-config If you think configure made a mistake, make sure you are using the latest version from Git. If the latest version fails, report the problem to the ffmpeg-user@ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net. Include the log file "ffbuild/config.log" produced by configure as this will help solve the problem.
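For what it's worth: configure is failing because pkg-config can't find libxml-2.0, and the Build-Depends line above pulls in zlib, libiconv and liblzma but not libxml2 itself. A sketch of the CONTROL change (an assumption, not a verified portfile):

Feature: libxml2
Description: Libxml2 is the XML C parser and toolkit developed for the Gnome project (but usable outside of the Gnome platform)
Build-Depends: libxml2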
Thank you for your attention. Regards, Pedro Cunha
FFMPEG images to video outputs junk
I'm very new to using FFMPEG and I'm trying to make an image sequence into a video. However, none of my attempts were successful. I first tried 825x480 .pngs and then 512x512 .jpgs, with pretty much the same junk result both times. I tried a few commands like
ffmpeg -framerate 24 -i img%03d.png output.mp4
ffmpeg -r 1/5 -framerate 24 -i img%03d.png -c:v libx264 -vf -pix_fmt yuv420p output.mp4

and so on. I just don't really understand what I'm supposed to do to prevent this.
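For what it's worth (a sketch with assumed filenames): the second command passes -vf with no filter argument, which is itself an error, and many players render garbage unless the pixel format is yuv420p and both dimensions are even (825 is odd). Something like this covers all three points:

ffmpeg -framerate 24 -i img%03d.png -c:v libx264 -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -pix_fmt yuv420p output.mp4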
h264_mp4toannexb: Remove unnecessary check
There can be at most 31 SPS and 255 PPS in the mp4/Matroska extradata. Given that each has a size of at most 2^16-1, the length of the output derived from these parameter sets can never overflow an ordinary 32 bit integer. So use a simple uint32_t instead of uint64_t and replace the unnecessary check with an av_assert1.

Signed-off-by: Andreas Rheinhardt
Signed-off-by: Michael Niedermayer
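A quick sanity check of that bound: (31 + 255) parameter sets of at most 2^16 - 1 = 65535 bytes each give 286 * 65535 = 18,743,010 bytes, and even adding a 4-byte start code per set keeps the total far below 2^32 - 1.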
h264 lossless coding
Is it possible to do completely lossless encoding in h264? By lossless, I mean that if I feed it a series of frames and encode them, and then if I extract all the frames from the encoded video, I will get the exact same frames as in the input, pixel by pixel, frame by frame. Is that actually possible? Take this example:
I generate a bunch of frames, then I encode the image sequence to an uncompressed AVI (with something like VirtualDub), and then apply lossless h264 (the help files claim that setting --qp 0 gives lossless compression, but I am not sure if that means there is no loss at any point of the process, or just that the quantization step is lossless). I can then extract the frames from the resulting h264 video with something like mplayer.
I tried Handbrake first, but it turns out it doesn't support lossless encoding. I tried x264, but it crashes. That may be because my source AVI file is in the RGB colorspace instead of YV12. In any case, I don't know how to feed a series of YV12 bitmaps to x264, or in what format, so I cannot even try.
In summary, what I want to know is whether there is a way to go from

Series of lossless bitmaps (in any colorspace) -> some transformation -> h264 encode -> h264 decode -> some transformation -> the original series of lossless bitmaps

Is there a way to achieve this?
EDIT: There is a VERY valid point about lossless H264 not making much sense. I am well aware that there is no way I could tell (with just my eyes) the difference between an uncompressed clip and one compressed at a high rate in H264, but I don't think it is without uses. For example, it may be useful for storing video for editing without taking huge amounts of space, without losing quality, and without spending too much encoding time every time the file is saved.
UPDATE 2: Now x264 doesn't crash. I can use as sources either avisynth or lossless YV12 Lagarith (to avoid the colorspace compression warning). However, even with --qp 0 and an RGB or YV12 source I still get some differences, minimal but present. This is troubling, because all the information I have found on lossless predictive coding (--qp 0) claims that the whole encoding should be lossless, but I am unable to verify this.
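One detail that may explain the residual differences (an assumption about this workflow, not a diagnosis): --qp 0 is lossless only within the colorspace the encoder works in, so an RGB source converted to YV12 loses information in the conversion itself, before any encoding happens. ffmpeg exposes an RGB variant of the encoder that avoids the conversion entirely:

ffmpeg -i input.avi -c:v libx264rgb -qp 0 output.mkv

Comparing frames in the same colorspace at both ends (RGB in against RGB out, or YV12 in against YV12 out) should then verify bit-exactness.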
How to add a day suffix [st, nd, rd, th] to the timestamp's date in an ffmpeg command?
I am using this command to add a timestamp to a video:
ffmpeg -y -i input.mp4 -vf "drawtext=fontfile=roboto.ttf:fontsize=36:fontcolor=yellow:text='%{pts\:gmtime\:1575526882\:%d %b, %Y %I\\\:%M %p}'" -preset ultrafast -f mp4 output.mp4
This command generates the date & time 05 Dec, 2019 06:21 AM, but I want to add a day suffix after the day, like this: 05th Dec, 2019 06:21 AM (i.e. 1st, 2nd, 3rd, 4th, 5th, etc.). What changes do I have to make to achieve this?
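As far as I know, the %{pts:gmtime:...} expansion goes through strftime, and strftime has no ordinal-suffix specifier, so there is nothing to add inside the format string itself. Since the timestamp in the command above is a fixed epoch value, one workaround is to format the text outside ffmpeg and pass the result as a literal text= value; a sketch in Python:

from datetime import datetime, timezone

ts = 1575526882
d = datetime.fromtimestamp(ts, tz=timezone.utc)
# 11th-13th are special cases; otherwise the suffix follows the last digit
suffix = 'th' if 11 <= d.day <= 13 else {1: 'st', 2: 'nd', 3: 'rd'}.get(d.day % 10, 'th')
print(d.strftime(f'%d{suffix} %b, %Y %I:%M %p'))  # 05th Dec, 2019 06:21 AM

A running clock would not work this way; that would need a different approach (e.g. one drawtext per day with enable expressions).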
Evolution #4412: Adding a site: retrieve its favicon as the logo

Not sure it's really useful to retrieve a favicon (an image that may be only 16 or 32 px wide in some cases) to illustrate a site. Besides, there is already the plugin https://plugins.spip.net/thumbsites, which retrieves a thumbnail of the site.