Channel: MediaSPIP
Viewing all 118125 articles

Anomalie #4167 (New): JVS script displayed during the redirect after a successful login in ... mode


Create an mp4 video from list of frames using ffmpeg


I have a directory of images that looks like this:

frame0d.jpg
frame1d.jpg
frame2d.jpg
...
frame4297d.jpg

I need to create an MP4 file at 30 fps from them.

I've tried this command:

ffmpeg -framerate 30 -i frame%04d.jpg video.mp4

But it fails with this error:

[image2 @ 0000026fd7c5a000] Could find no file with path 'frame%04d.jpg' and index in the range 0-4
frame%04d.jpg: No such file or directory

Thanks a lot!
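For reference: the `%04d` pattern only matches zero-padded names (frame0000d.jpg, frame0001d.jpg, ...). Since the frames here are numbered without padding, an un-padded `%d` (followed by the literal "d") should match instead. A sketch, not tested against this exact directory:

```shell
# %d matches un-padded frame numbers; the trailing "d" before .jpg is literal.
ffmpeg -framerate 30 -i frame%dd.jpg -c:v libx264 -pix_fmt yuv420p video.mp4
```

On non-Windows shells, `-pattern_type glob -i 'frame*d.jpg'` is an alternative that sidesteps the numbering pattern entirely.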

How to convert a single image to an mp4 video?


How do I convert a single image to an MP4 video?

For example, I need to play the same image for 20 seconds (the duration will be dynamic).

I know it's possible with ffmpeg. I searched Google and SO but unfortunately I was not able to find the right tutorial.

I just want a pointer in the right direction. Can anyone guide me?

Any comments or other possible approaches are also welcome.
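A minimal sketch of one way to do this, assuming an input called image.jpg (a placeholder filename): `-loop 1` repeats the single frame and `-t` sets the duration, which can be filled in dynamically.

```shell
# Loop one still image for 20 seconds (substitute the dynamic duration for -t 20).
ffmpeg -loop 1 -i image.jpg -t 20 -r 25 -c:v libx264 -pix_fmt yuv420p out.mp4
```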

Updated (reproducible) - Gaps when recording using the MediaRecorder API (audio/webm opus)


----- UPDATE HAS BEEN ADDED BELOW -----

I have an issue with MediaRecorder API (https://www.w3.org/TR/mediastream-recording/#mediarecorder-api).

I'm using it to record speech from the web page (Chrome was used in this case) and save it as chunks. I need to be able to play it both while and after it is recorded, so it's important to keep those chunks.

Here is the code which is recording data:

navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(function(stream) {
    recorder = new MediaRecorder(stream, { mimeType: 'audio/webm; codecs="opus"' })
    recorder.ondataavailable = function(e) {
        // Read blob from `e.data`, base64-decode and send to server
    }
    recorder.start(1000)
})

The issue is that the WebM file I get when I concatenate all the parts is (rarely) corrupted. I can play it as WebM, but when I try to convert it (with ffmpeg) to something else, I get a file with shifted timings.

For example, I'm trying to convert a file with duration 00:36:27.78 to wav, but I get a file with duration 00:36:26.04, which is 1.74 s less.

At the beginning of the file the audio is the same, but after about 10 minutes the WebM file plays with a small delay.

After some research, I found that it also does not play correctly with the browser's MediaSource API, which I use for playing the chunks. I tried two ways of playing those chunks:

In the case where I just merge all the parts into a single blob, it works fine. In the case where I add them via the sourceBuffer object, it has some gaps (I can see them by inspecting the buffered property):

  • 697.196 - 697.528 (~330ms)
  • 996.198 - 996.754 (~550ms)
  • 1597.16 - 1597.531 (~370ms)
  • 1896.893 - 1897.183 (~290ms)

Those gaps add up to 1.55 s in total, and they are exactly in the places where the desync between the wav and webm files starts. Unfortunately, the file where this is reproducible cannot be shared because it contains a customer's private data, and I have not been able to reproduce the issue on other media yet.

What can be the cause for such an issue?

----- UPDATE ----- I was able to reproduce the issue on https://jsfiddle.net/96uj34nf/4/

In order to see the problem, click the "Print buffer zones" button and it will display the buffered time ranges: 0 - 136.349, 141.388 - 195.439, 197.57 - 198.589. Between them there are two gaps:

  1. 136.349 - 141.388
  2. 195.439 - 197.57

So, as you can see, there are roughly 5-second and 2-second gaps. I would be happy if someone could shed some light on why this happens or how to avoid it.

Thank you


FFMPEG Overlay one video on top of another video at specific location


I have two videos (in MP4 format), and I would like to overlay one video on top of the other.

  • Both videos have the same duration
  • The bottom video is of resolution 640px by 640px
  • The top video is also of resolution 640px by 640px

I need to scale down the top video resolution to 580px by 580px. Then I need to position it in a specific location on top of the bottom video.

I tried the command below:

ffmpeg -i bottom.mp4 -i top.mp4 -filter_complex "[0:0][1:0]overlay=enable='between(t\,0,50)'[out]" -shortest -map [out] -map 0:1 -pix_fmt yuv420p -c:a copy -dn -c:v libx264 -crf 18 output.mp4

It does seem to put the top video over the bottom video. But the top video was not scaled down, and by default it sits in the top-left corner of the bottom video.

In addition, somehow the very first frame does not show the top video at all; it only appears at around the 0.5 s mark. Is there any way to make the top video show from the very first frame too?

Thank you all in advance!
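For reference, a sketch (untested) of the scale-then-overlay chain being asked about. The position x=30:y=30 is a placeholder; substitute the actual coordinates. Without the `enable='between(...)'` condition, the overlay is active from the very first frame:

```shell
# Scale the top video to 580x580, then overlay it at an assumed position (30,30).
ffmpeg -i bottom.mp4 -i top.mp4 \
  -filter_complex "[1:v]scale=580:580[top];[0:v][top]overlay=x=30:y=30[out]" \
  -map "[out]" -map 0:a? -c:v libx264 -crf 18 -c:a copy -pix_fmt yuv420p output.mp4
```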

FFmpeg skips rendering frames


While extracting frames from a video I noticed that ffmpeg won't finish rendering certain images. The problem turned out to be the byte "padding" between two JPEG images: if my buffer size is 4096, and bytes from the previous image and the next image sit in that buffer without any separating bytes, then the next image is not rendered properly. Why is that?

-i path -f image2pipe -c:v mjpeg -q:v 2 -vf fps=25 pipe:1

(screenshot: a frame that failed to render completely)

Rendered frame:

(screenshot: a correctly rendered frame)

Code sample:

public void ExtractFrames()
{
    string FFmpegPath = "Path...";
    string Arguments = $"-i { VideoPath } -f image2pipe -c:v mjpeg -q:v 2 -vf fps=25/1 pipe:1";

    using (Process cmd = GetProcess(FFmpegPath, Arguments))
    {
        cmd.Start();
        FileStream fStream = cmd.StandardOutput.BaseStream as FileStream;
        bool Add = false;
        int i = 0, n = 0, BufferSize = 4096;
        byte[] buffer = new byte[BufferSize + 1];
        MemoryStream mStream = new MemoryStream();

        while (true)
        {
            if (i.Equals(BufferSize))
            {
                i = 0;
                buffer[0] = buffer[BufferSize];
                if (fStream.Read(buffer, 1, BufferSize) == 0)
                    break;
            }
            if (buffer[i].Equals(255) && buffer[i + 1].Equals(216))
            {
                Add = true;
            }
            if (buffer[i].Equals(255) && buffer[i + 1].Equals(217))
            {
                n++;
                Add = false;
                mStream.Write(new byte[] { 255, 217 }, 0, 2);
                File.WriteAllBytes($@"C:\Path...\{n}.jpg", mStream.ToArray());
                mStream = new MemoryStream();
            }
            if (Add)
                mStream.WriteByte(buffer[i]);
            i++;
        }
        cmd.WaitForExit();
        cmd.Close();
    }
}

private Process GetProcess(string FileName, string Arguments)
{
    return new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = FileName,
            Arguments = Arguments,
            UseShellExecute = false,
            RedirectStandardOutput = true,
            CreateNoWindow = false,
        }
    };
}

A video sample (>480p) with a length of 60 seconds or more should be used for testing.


Using swscale for image composing


I have an input image A and a resulting image B of size 800x600, stored in YUV420 format. I need to scale image A to 100x100 and place it into the resulting image B at some point (x=100, y=100). To decrease memory and CPU usage I write the swscale result directly into the final B image.

Here is a code snippets (pretty straightforward):

// here we are creating an sws context for scaling to 100x100
sws_ctx = sws_getCachedContext(sws_ctx, frame.hdr.width, frame.hdr.height, AV_PIX_FMT_YUV420P, 100, 100, AV_PIX_FMT_YUV420P, SWS_BILINEAR, nullptr, nullptr, nullptr);

Next we create the corresponding slices and strides describing image A:

int src_y_plane_sz = frame.hdr.width * frame.hdr.height;
int src_uv_plane_sz = src_y_plane_sz / 2;
std::int32_t src_stride[] = {
    frame.hdr.width,
    frame.hdr.width / 2,
    frame.hdr.width / 2,
    0};
const uint8_t* const src_slice[] = {
    &frame.raw_frame[0],
    &frame.raw_frame[0] + src_y_plane_sz,
    &frame.raw_frame[0] + src_y_plane_sz + src_uv_plane_sz,
    nullptr};

Now we do the same for the destination image B:

std::int32_t dst_stride[] = {
    current_frame.hdr.width,
    current_frame.hdr.width / 2,
    current_frame.hdr.width / 2,
    0};
std::int32_t y_plane_sz = current_frame.hdr.width * current_frame.hdr.height;
std::int32_t uv_plane_sz = y_plane_sz / 2;
// calculate offset in slices for x=100, y=100 position
std::int32_t y_offset = current_frame.hdr.width * 100 + 100;
uint8_t* const dst_slice[] = {
    &current_frame.raw_frame[0] + y_offset,
    &current_frame.raw_frame[0] + y_plane_sz + y_offset / 2,
    &current_frame.raw_frame[0] + y_plane_sz + uv_plane_sz + y_offset / 2,
    nullptr};

Finally, we call sws_scale:

 int ret = sws_scale(sws_ctx, src_slice, src_stride, 0, frame.hdr.height, dst_slice, dst_stride);

After running a test sequence I get invalid results with the following problems:

  1. The Y component has some extra padding lines
  2. The UV components are misplaced - they are a bit lower than the original Y components.

(screenshot: the resulting artefacts)

Has anyone had the same problems with the swscale function? I am pretty new to this FFmpeg library collection, so I am open to any opinions on how to perform this task correctly.

FFmpeg version used 3.3


make video thumbnail firebase cloud function [duplicate]


This question already has an answer here:

I am trying to generate a thumbnail when a video is uploaded to storage, but that doesn't seem to be working and gives me an error.

const tempThumbnailFilePath = path.join(os.tmpdir(), 'neew.jpg');
return bucket.file(thumbnailName).download({ destination: tempThumbnail, }).then(() => {
    return spawn('ffmpeg', ['-ss', '0', '-i', tempThumbnail, '-f', 'image2', '-vframes', '1', '-vf', 'scale=512:-1', tempThumbnailFilePath], { capture: ['stdout', 'stderr'] }).then((writeResult) => {
        console.log('thumbnail created');
        console.log('[spawn] stdout: ', writeResult.stdout.toString());
    }).catch(function (err) {
        console.log('[spawn] stdout: ', err);
    });

I get an error like this:

[spawn] stdout: { Error: spawn ffmpeg ENOENT at exports._errnoException (util.js:1020:11)


Command-line streaming webcam with audio from Ubuntu server in WebM format


I am trying to stream video and audio from a webcam connected to my headless Ubuntu server (running Maverick, 10.10). I want to stream in WebM format (VP8 video + Vorbis audio). Bandwidth is limited, so the stream must stay below 1 Mbps.

I have tried using FFmpeg. I am able to record WebM video from the webcam with the following:

ffmpeg -s 640x360 \
-f video4linux2 -i /dev/video0 -isync -vcodec libvpx -vb 768000 -r 10 -vsync 1 \
-f alsa -ac 1 -i hw:1,0 -acodec libvorbis -ab 32000 -ar 11025 \
-f webm /var/www/telemed/test.webm 

However, despite experimenting with all manner of vsync and async options, I either get out-of-sync audio, or Benny Hill-style fast-forward video with matching fast audio. I have also been unable to get this actually working with ffserver (by replacing the test.webm path and filename with the relevant feed filename).

The objective is a live audio + video feed that is viewable in a modern browser, within a tight bandwidth budget, using only open-source components. (None of that MP3-format legal chaff.)

My questions are therefore: how would you go about streaming WebM from a webcam on Linux with in-sync audio? What software would you use?

Have you succeeded in encoding webm from a webcam with in-sync audio via FFmpeg? If so, what command did you issue?

Is it worth persevering with FFmpeg + FFserver, or are there other more suitable command-line tools around (e.g. VLC, which doesn't seem too well suited to encoding)?

Is something like GStreamer + Flumotion configurable from the command line? If so, where do I find command-line documentation? The Flumotion docs are rather light on command-line details.

Thanks in advance!

How to use ffmpeg for segmentation of rtsp-stream into mov files


I have a webcam that provides an RTSP stream, and I want to save it into MOV chunks of, let's say, 5 seconds. The stream is:

rtsp://user:pwd@192.168.1.90:10554/tcp/av0_0

I can open the stream and play it in VLC. I run ffmpeg as below, which looks sane, but the output is black video only. Where did I go wrong?

ffmpeg -rtsp_transport tcp -i "rtsp://user:pwd@192.168.1.90:10554/tcp/av0_0" -f segment -segment_time 5 -segment_format mov -c copy -map 0 video%d.mov
ffmpeg version 3.1.1 Copyright (c) 2000-2016 the FFmpeg developers
  built with Apple LLVM version 7.3.0 (clang-703.0.31)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/3.1.1 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-opencl --enable-libx264 --enable-libmp3lame --enable-libxvid --enable-ffplay --disable-lzma --enable-vda
  libavutil      55. 28.100 / 55. 28.100
  libavcodec     57. 48.101 / 57. 48.101
  libavformat    57. 41.100 / 57. 41.100
  libavdevice    57.  0.101 / 57.  0.101
  libavfilter     6. 47.100 /  6. 47.100
  libavresample   3.  0.  0 /  3.  0.  0
  libswscale      4.  1.100 /  4.  1.100
  libswresample   2.  1.100 /  2.  1.100
  libpostproc    54.  0.100 / 54.  0.100
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://user:pwd@192.168.1.90:10554/tcp/av0_0':
  Metadata:
    title           : streamed by the RTSP server
  Duration: N/A, start: 0.230000, bitrate: N/A
    Stream #0:0: Video: h264 (High), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
    Stream #0:1: Audio: pcm_alaw, 8000 Hz, 1 channels, s16, 64 kb/s
[segment @ 0x7fda5e012400] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
    Last message repeated 1 times
Output #0, segment, to 'video%d.mov':
  Metadata:
    title           : streamed by the RTSP server
    encoder         : Lavf57.41.100
    Stream #0:0: Video: h264, yuv420p, 1280x720, q=2-31, 20 fps, 25 tbr, 10240 tbn, 40 tbc
    Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, 64 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[segment @ 0x7fda5e012400] Non-monotonous DTS in output stream 0:0; previous: 0, current: -1946; changing to 1. This may result in incorrect timestamps in the output file.
[segment @ 0x7fda5e012400] Non-monotonous DTS in output stream 0:0; previous: 1, current: -1536; changing to 2. This may result in incorrect timestamps in the output file.
[segment @ 0x7fda5e012400] Non-monotonous DTS in output stream 0:0; previous: 2, current: -1126; changing to 3. This may result in incorrect timestamps in the output file.
[segment @ 0x7fda5e012400] Non-monotonous DTS in output stream 0:0; previous: 3, current: -717; changing to 4. This may result in incorrect timestamps in the output file.
[segment @ 0x7fda5e012400] Non-monotonous DTS in output stream 0:0; previous: 4, current: -307; changing to 5. This may result in incorrect timestamps in the output file.
frame= 204 fps= 24 q=-1.0 Lsize=N/A time=00:00:07.89 bitrate=N/A speed=0.937x
video:2057kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Exiting normally, received signal 2.

How to stream all videos in a folder?


Hi, I want to stream videos over the web using ffserver. I used this link as a reference.

Now, what I am not able to figure out is how to pass a folder (containing all the videos I want to stream) as input, so that all of them are streamed. I also want to add more videos to this folder dynamically from time to time and have them streamed too (like it works in Darwin). I can't use Darwin because it doesn't support iOS.

Please give me a suggestion.

Is there any other open-source tool with which I can do this?


Convert from 30 to 60fps by increasing speed, not duplicating frames, using FFmpeg


I have a video that is incorrectly labelled as 30 fps; it is actually 60 fps, and so it looks like it's being played at half speed. The audio is fine, that is, the soundtrack finishes halfway through the video clip. I'd like to know how, if possible, to fix this: that is, to double the video speed, making it 60 fps, so that the audio and video are in sync.

The file is H.264 and the audio MPEG-4 AAC.

File details as given by ffmpeg, as requested:

ffmpeg version 0.8.9-6:0.8.9-0ubuntu0.13.10.1, Copyright (c) 2000-2013 the Libav developers
built on Nov 9 2013 19:09:46 with gcc 4.8.1
*** THIS PROGRAM IS DEPRECATED ***
This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './Tignes60fps.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2014-01-13 02:23:09
  Duration: 00:08:33.21, start: 0.000000, bitrate: 5690 kb/s
    Stream #0.0(eng): Video: h264 (High), yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 5609 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc
    Metadata:
      creation_time   : 2014-01-13 02:23:09
    Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16, 156 kb/s
    Metadata:
      creation_time   : 2014-01-13 02:23:09
At least one output file must be specified
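One possible sketch (untested, and it needs a far more recent FFmpeg than the deprecated 0.8.9 build shown above): the setts bitstream filter can rescale every video timestamp during a stream copy, so halving them relabels the stream as double speed (59.94 fps) without re-encoding, while the audio is copied untouched.

```shell
# Halve every video timestamp; -c copy means nothing is re-encoded.
ffmpeg -i Tignes60fps.mp4 -map 0 -c copy -bsf:v setts=ts=TS/2 fixed.mp4
```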

ffmpeg - skipping frames while watermarking


I am building a video watermark utility based on ffmpeg.

But watermarking takes considerable time. I wish to reduce the watermarking time by skipping frames during processing.

Is it possible to skip frames while processing video?

Like adding text to only every other frame?

ffmpeg -i "D://sample.mp4" -vf [in]drawtext=fontfile='C\://Windows//Fonts//Calibri.ttf':fontsize=27.9:text=www.example.com:fontcolor=#ffffff:box=1:boxcolor=0x00000099:x=w-tw-10:y=h-th-10 -y D:\watermarked.mp4
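For what it's worth, drawtext supports a timeline "enable" expression, so the text can be drawn only on every other frame (even frame numbers n). A sketch with an assumed font path; whether this actually reduces processing time noticeably is not guaranteed, since every frame is still decoded and re-encoded:

```shell
# Draw the watermark only when mod(n,2) == 0, i.e. on every other frame.
ffmpeg -i sample.mp4 \
  -vf "drawtext=fontfile=/path/to/font.ttf:text='www.example.com':fontcolor=white:box=1:boxcolor=0x00000099:x=w-tw-10:y=h-th-10:enable='eq(mod(n\,2)\,0)'" \
  -y watermarked.mp4
```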

c# how to capture audio from nvlc and raise Accord.Audio.NewFrameEventArgs


I'm working on an application in C# that records video streams from IP cameras. I'm using Accord.Video.FFMPEG.VideoFileWriter and the nVLC C# wrapper. I have a class that captures audio from the stream using nVLC; it should implement the IAudioSource interface, so I've used CustomAudioRenderer to capture sound data and then raised the NewFrame event containing the signal object. The problem is that when saving the signal to a video file, the sound is terrible (choppy) when recording from an RTSP stream, but of good quality when recording from the local mic (on a laptop). Here is the code that raises the event:

public void Start()
{
    _mFactory = new MediaPlayerFactory();
    _mPlayer = _mFactory.CreatePlayer();
    _mMedia = _mFactory.CreateMedia(Source);
    _mPlayer.Open(_mMedia);
    var fc = new Func<SoundFormat, SoundFormat>(SoundFormatCallback);
    _mPlayer.CustomAudioRenderer.SetFormatCallback(fc);
    var ac = new AudioCallbacks { SoundCallback = SoundCallback };
    _mPlayer.CustomAudioRenderer.SetCallbacks(ac);
    _mPlayer.Play();
}

private void SoundCallback(Sound newSound)
{
    var data = new byte[newSound.SamplesSize];
    Marshal.Copy(newSound.SamplesData, data, 0, (int)newSound.SamplesSize);
    NewFrame(this, new Accord.Audio.NewFrameEventArgs(new Signal(data, Channels, data.Length, SampleRate, Format)));
}

private SoundFormat SoundFormatCallback(SoundFormat arg)
{
    Channels = arg.Channels;
    SampleRate = arg.Rate;
    BitPerSample = arg.BitsPerSample;
    return arg;
}

And here is the code that captures the event:

private void source_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
    Signal sig = eventArgs.Signal;
    duration += eventArgs.Signal.Duration;
    if (videoFileWrite == null)
    {
        videoFileWrite = new VideoFileWriter();
        videoFileWrite.AudioBitRate = sig.NumberOfSamples * sig.NumberOfChannels * sig.SampleSize;
        videoFileWrite.SampleRate = sig.SampleRate;
        videoFileWrite.FrameSize = sig.NumberOfSamples / sig.NumberOfFrames;
        videoFileWrite.Open("d:\\output.mp4");
    }
    if (isStartRecord)
    {
        DoneWriting = false;
        using (MemoryStream ms = new MemoryStream())
        {
            encoder = new WaveEncoder(ms);
            encoder.Encode(eventArgs.Signal);
            ms.Seek(0, SeekOrigin.Begin);
            decoder = new WaveDecoder(ms);
            Signal s = decoder.Decode();
            videoFileWrite.WriteAudioFrame(s);
            encoder.Close();
            decoder.Close();
        }
        DoneWriting = true;
    }
}

Thanks in advance.

Revision 111302: Removal of a PHP notice. In fact, the previous commit didn't ...

Removal of a PHP notice. In fact, the previous commit didn't seem to work. Apart from that, there is apparently a long-standing bug in the default value when the input is entered in PHP.

Using ffmpeg to change framerate


I am trying to convert a video clip (MP4, yuv420p) from 30 fps to 24 fps. The number of frames must stay the same, so my output should change from 20 minutes at 30 fps to 25 minutes at 24 fps. Everything else should remain the same.

Try as I might, everything I try with ffmpeg either converts the frame rate but changes the number of frames to keep the same duration, or changes the duration without altering the frame rate.

So I have typically been trying things like:

ffmpeg -y -r 30 -i seeing_noaudio.mp4 -r 24 seeing.mp4

(I'm doing this on Windows but would normally be on Linux.) That converts the frame rate but drops frames, so the total duration is unaltered.

Or I have tried

ffmpeg -y -i seeing_noaudio.mp4 -filter:v "setpts=1.25*PTS" seeing.mp4

This changes the duration but not the frame rate.

Surely I should be able to do this with a single ffmpeg command, without having to re-encode or, as some people have suggested, go back to the original raw frames.

Help, please!
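A commonly suggested sketch (untested against this file): copy the raw H.264 stream out of the container, then remux it at 24 fps. Both steps are stream copies, so nothing is re-encoded, every frame is kept, and the duration grows from 20 to 25 minutes.

```shell
# Step 1: extract the raw video stream (no re-encode).
ffmpeg -y -i seeing_noaudio.mp4 -c copy seeing.h264
# Step 2: remux the raw stream at the new 24 fps rate.
ffmpeg -y -r 24 -i seeing.h264 -c copy seeing.mp4
```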

How to setup ffmpeg to read data from pipe permanently?


I have the following args for ffmpeg:

-y -f rawvideo -vcodec rawvideo -pixel_format rgba -colorspace bt709 -video_size 1280x720 -i - -vcodec libx264 -preset ultrafast -tune zerolatency -f flv -listen 1 rtmp://127.0.0.1

ffmpeg is waiting for incoming connections. If nobody is connected to ffmpeg, then ffmpeg doesn't read data from the pipe and my data source hangs. How do I tell ffmpeg to read data from the pipe permanently, even without connected clients?

Change the framerate of a video including audio without re-encoding using ffmpeg


I have a 30.02 fps .mp4 video and I'm trying to convert it to 30 fps. I found this question, but if I try the method described in the first answer, the audio gets lost, because the video is converted into a .h264 file, which can't contain audio, as I read here.

(I'm sorry if I have issues with the English language; it isn't my mother tongue.)

EDIT: Here is what I enter in the terminal and the output it returns.

Input: ffmpeg -y -i 01.mp4 -f h264 -c copy temp1.h264

Output: (I replaced the real paths with path\to\...)

ffmpeg version N-91589-ge0539f0349 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 8.2.1 (GCC) 20180808
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
libavutil 56. 18.102 / 56. 18.102
libavcodec 58. 22.101 / 58. 22.101
libavformat 58. 17.101 / 58. 17.101
libavdevice 58. 4.101 / 58. 4.101
libavfilter 7. 26.100 / 7. 26.100
libswscale 5. 2.100 / 5. 2.100
libswresample 3. 2.100 / 3. 2.100
libpostproc 55. 2.100 / 55. 2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'path\to\01.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.17.101
Duration: 00:02:42.01, start: 0.000000, bitrate: 5168 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720, 5000 kb/s, 30.02 fps, 30 tbr, 16k tbn, 60 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 159 kb/s (default)
Metadata:
handler_name : SoundHandler
Output #0, h264, to 'path\to\temp1.h264':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.17.101
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720, q=2-31, 5000 kb/s, 30.02 fps, 30 tbr, 30 tbn, 30 tbc (default)
Metadata:
handler_name : VideoHandler
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
frame= 4862 fps=0.0 q=-1.0 Lsize= 98859kB time=00:02:42.00 bitrate=4999.1kbits/s speed= 351x
video:98860kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown

Input: ffmpeg -y -r 30 -i "temp1.h264" -c copy "temp1.mp4"

Output: (I replaced the real paths with path\to\...)

ffmpeg version N-91589-ge0539f0349 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 8.2.1 (GCC) 20180808
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
libavutil 56. 18.102 / 56. 18.102
libavcodec 58. 22.101 / 58. 22.101
libavformat 58. 17.101 / 58. 17.101
libavdevice 58. 4.101 / 58. 4.101
libavfilter 7. 26.100 / 7. 26.100
libswscale 5. 2.100 / 5. 2.100
libswresample 3. 2.100 / 3. 2.100
libpostproc 55. 2.100 / 55. 2.100
Input #0, h264, from 'path\to\temp1.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p(progressive), 1280x720, 30 fps, 30 tbr, 1200k tbn, 60 tbc
Output #0, mp4, to 'path\to\temp1.mp4':
Metadata:
encoder : Lavf58.17.101
Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1280x720, q=2-31, 30 fps, 30 tbr, 15360 tbn, 30 tbc
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0000018e11422ec0] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mp4 @ 0000018e11422ec0] pts has no value
Last message repeated 156 times
[mp4 @ 0000018e11422ec0] pts has no valueB time=00:00:05.13 bitrate=5719.5kbits/s speed=10.2x
[... the same "pts has no value" message keeps repeating (roughly every 150 packets) until the end of the file ...]
frame= 4862 fps=301 q=-1.0 Lsize= 98882kB time=00:02:41.96 bitrate=5001.3kbits/s speed= 10x
video:98859kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.023950%
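A sketch of how the two steps above can be combined so the audio is not lost (untested): re-attach the original audio track from 01.mp4 during the final remux. Everything is stream-copied, so nothing is re-encoded.

```shell
# Step 1: extract the raw video stream, as in the question.
ffmpeg -y -i 01.mp4 -map 0:v -c copy temp1.h264
# Step 2: remux at 30 fps, pulling the audio from the original file.
ffmpeg -y -r 30 -i temp1.h264 -i 01.mp4 -map 0:v -map 1:a -c copy output.mp4
```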

lavfi: add erosion_opencl, dilation_opencl filters

lavfi: add erosion_opencl, dilation_opencl filters Add erosion_opencl, dilation_opencl filters. Behave like existing erosion and dilation filters.
  • [DH] configure
  • [DH] libavfilter/Makefile
  • [DH] libavfilter/allfilters.c
  • [DH] libavfilter/opencl/neighbor.cl
  • [DH] libavfilter/opencl_source.h
  • [DH] libavfilter/vf_neighbor_opencl.c

How to use an executable as a filter for FFmpeg?


My goal is to receive broadcasts from clients, improve the image quality by applying some filters, and then rebroadcast to some other servers.

My first idea was to use OpenCV to modify the broadcast, but I realized that I would have problems synchronizing the changed frames with the audio. I don't know how I could change the frames and then recreate the broadcast with the audio.

Then I decided to go with FFmpeg, using FFmpeg filters; it seems that if I just use ffmpeg from end to end, the audio stays in sync on its own. The problem is that the filters I want to apply are complex: they are external programs. I would like an FFmpeg filter that can modify the frames by running my external program.
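One common pattern for this (a sketch, untested; `./my_filter_program`, the input name, the resolution, and the frame rate are all placeholder assumptions): let FFmpeg decode raw frames to stdout, pipe them through the external program, and re-encode on the other side while muxing the original audio back in.

```shell
# Decoder -> external frame filter -> encoder; audio is taken from the original
# input in the second ffmpeg and stream-copied, so it stays untouched.
ffmpeg -i input.flv -f rawvideo -pix_fmt rgb24 - \
  | ./my_filter_program \
  | ffmpeg -f rawvideo -pix_fmt rgb24 -s 1280x720 -r 30 -i - -i input.flv \
      -map 0:v -map 1:a -c:v libx264 -c:a copy output.mp4
```

The external program must read and write frames of exactly the declared size, since rawvideo carries no headers; any drift in geometry or rate shows up as garbled frames or A/V desync.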
