
Using ffmpeg to record screen returns corrupted file


I'm using ffmpeg and python to record my desktop screen. I've got it working so that when I input a shortcut, it starts recording. Then I use .terminate() on the subprocess to stop recording. When outputting to an mp4, this corrupts the file and makes it unreadable. I can output the file as an flv or avi and it doesn't get corrupted, but then the video doesn't contain time/duration data, something I need.

Is there a way I can gracefully stop the recording when outputting an mp4? Or is there a way I can include the time/duration data in the flv/avi?

import keyboard
import os
from subprocess import Popen

class Main:
    def __init__(self):
        self.on = False

    def main(self):
        if not self.on:
            # remove any leftover recording before starting a new one
            if os.path.isfile("output.mp4"):
                os.remove("output.mp4")

            self.process = Popen('ffmpeg -f gdigrab -framerate 30 -video_size 1920x1080 -i desktop -b:v 5M output.mp4')
            self.on = True
        else:
            # stopping the recording this way corrupts the mp4
            self.process.terminate()
            self.on = False

run = Main()
keyboard.add_hotkey("ctrl+shift+g", lambda: run.main())

keyboard.wait()
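
One option, as a minimal sketch (not tested against this exact setup): when muxing MP4, ffmpeg only writes the index (the moov atom) on a clean shutdown, which is why .terminate() leaves an unreadable file. Opening the process with stdin piped and sending it the letter q asks ffmpeg to stop gracefully and finish the file:

from subprocess import Popen, PIPE

# start ffmpeg with a pipe for stdin so we can send it commands later
process = Popen(
    'ffmpeg -f gdigrab -framerate 30 -video_size 1920x1080 -i desktop -b:v 5M output.mp4',
    stdin=PIPE)

# ... later, to stop recording: send "q" (ffmpeg's quit command) instead of terminate(),
# so ffmpeg can write the MP4 index (moov atom) and close the file cleanly
process.communicate(input=b'q')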

Getting video duration from the terminal with FFmpeg in C++


I need to get a video's duration with ffmpeg (just from the command line) and use it in my code, but ffmpeg only prints it to the terminal!

I also want to use the value to run some further command-line operations.

ffmpeg -i VideoName.mp4

and this is part of the output:

Duration: 00:00:58.05, start: 0.000000, bitrate: 170 kb/s

How can I get this information from the terminal into my code?
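
The question asks for C++, where the usual approach is to run the command with popen() and read its stdout. As a hedged illustration of the idea (sketched in Python to match the other snippets in this digest), ffprobe can print just the duration in seconds, so there is nothing to parse out of ffmpeg's banner; the file name is a placeholder:

import subprocess

# ffprobe prints only the duration in seconds, e.g. "58.054989"
out = subprocess.run(
    ['ffprobe', '-v', 'quiet', '-show_entries', 'format=duration',
     '-of', 'csv=p=0', 'VideoName.mp4'],
    capture_output=True, text=True)
duration = float(out.stdout.strip())
print(duration)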

Error while converting video to frames with ffmpeg


When I execute the conversion command in cmd, it fails to produce the output.

(screenshot: cmd output of the command — image not included in this post)

How to stream mp4 file with fluent-ffmpeg?


I am trying to stream a video file with fluent-ffmpeg, but I couldn't get it to work. Here is my code:

var fs = require('fs');
var ffmpeg = require('fluent-ffmpeg');

var filePath = null;
filePath = "video.mp4";

var stat = fs.statSync(filePath);
var total = stat.size;

var range = req.headers.range;
var parts = range.replace(/bytes=/, "").split("-");
var partialstart = parts[0];
var partialend = parts[1];

var start = parseInt(partialstart, 10);
var end = partialend ? parseInt(partialend, 10) : total - 1;
var chunksize = (end - start) + 1;

var file = fs.createReadStream(filePath, {start: start, end: end});

res.writeHead(206, {
    'Content-Range': 'bytes ' + start + '-' + end + '/' + total,
    'Accept-Ranges': 'bytes',
    'Content-Length': chunksize,
    'Content-Type': 'video/mp4'
});

ffmpeg(file)
    .videoCodec('libx264')
    .withAudioCodec('aac')
    .format('mp4')
    .videoFilters({
        filter: 'drawtext',
        options: {
            fontsize: 20,
            fontfile: 'public/fonts/Roboto-Black.ttf',
            text: "USERNAME",
            x: 10,
            y: 10,
            fontcolor: "red"
        }
    })
    .outputOptions(['-frag_duration 100', '-movflags frag_keyframe+faststart', '-pix_fmt yuv420p'])
    .output(res, {end: true})
    .on('error', function(err, stdout, stderr) {
        console.log('an error happened: ' + err.message + stdout + stderr);
    })
    .run();

When I run this code block, the video does not play and it throws an error:

an error happened: ffmpeg exited with code 1: pipe:0: Invalid data found when processing input

When I don't use the stream as input, the video plays in Chrome, but after a short time the video player throws an error.

Is there any way I can show text while playing the video, with ffmpeg or without it?

Use MinGW to compile x264 in windows, Error Code 5

rm -f libx264.a 
gcc-ar rc libx264.a common/osdep.o common/base.o common/cpu.o common/tables.o encoder/api.o common/win32thread.o common/mc-8.o common/predict-8.o common/pixel-8.o common/macroblock-8.o common/frame-8.o common/dct-8.o common/cabac-8.o common/common-8.o common/rectangle-8.o common/set-8.o common/quant-8.o common/deblock-8.o common/vlc-8.o common/mvpred-8.o common/bitstream-8.o encoder/analyse-8.o encoder/me-8.o encoder/ratecontrol-8.o encoder/set-8.o encoder/macroblock-8.o encoder/cabac-8.o encoder/cavlc-8.o encoder/encoder-8.o encoder/lookahead-8.o common/threadpool-8.o common/x86/mc-c-8.o common/x86/predict-c-8.o common/opencl-8.o encoder/slicetype-cl-8.o common/mc-10.o common/predict-10.o common/pixel-10.o common/macroblock-10.o common/frame-10.o common/dct-10.o common/cabac-10.o common/common-10.o common/rectangle-10.o common/set-10.o common/quant-10.o common/deblock-10.o common/vlc-10.o common/mvpred-10.o common/bitstream-10.o encoder/analyse-10.o encoder/me-10.o encoder/ratecontrol-10.o encoder/set-10.o encoder/macroblock-10.o encoder/cabac-10.o encoder/cavlc-10.o encoder/encoder-10.o encoder/lookahead-10.o common/threadpool-10.o common/x86/mc-c-10.o common/x86/predict-c-10.o common/x86/cpu-a.o common/x86/dct-32-8.o common/x86/pixel-32-8.o common/x86/bitstream-a-8.o common/x86/const-a-8.o common/x86/cabac-a-8.o common/x86/dct-a-8.o common/x86/deblock-a-8.o common/x86/mc-a-8.o common/x86/mc-a2-8.o common/x86/pixel-a-8.o common/x86/predict-a-8.o common/x86/quant-a-8.o common/x86/sad-a-8.o common/x86/dct-32-10.o common/x86/pixel-32-10.o common/x86/bitstream-a-10.o common/x86/const-a-10.o common/x86/cabac-a-10.o common/x86/dct-a-10.o common/x86/deblock-a-10.o common/x86/mc-a-10.o common/x86/mc-a2-10.o common/x86/pixel-a-10.o common/x86/predict-a-10.o common/x86/quant-a-10.o common/x86/sad16-a-10.o
make: *** [libx264.a] Error 5

The text above is the output information in the shell, and I don't know how to get more detailed information. What does "Error 5" mean?

avcodec/cfhdenc: add gbrap12 pixel format support

avcodec/cfhdenc: add gbrap12 pixel format support
  • [DH] libavcodec/cfhdenc.c

avcodec/cfhdenc: free alpha buffer on closing

avcodec/cfhdenc: free alpha buffer on closing
  • [DH] libavcodec/cfhdenc.c

How to make old 4:3 video content into 16:9 with crop/zoom and slight stretch?


I have old video cam footage in 4:3 format that I'd like to have play better on modern 16:9 screens, specifically I'd like to:

  • Crop/"Zoom" the video "a little" (cutting a tiny bit, maybe 10-15%, of the top and bottom of the video off)
  • Stretch the video "a tiny bit" (maybe 10%) - this can of course ruin the footage, so I only want to stretch it a tiny bit
  • Still keep a bit of a border on the sides (since I don't want to stretch or crop the video too much)

I'll still keep the original files too, but would like a version that just plays slightly nicer natively in 16:9. And I'd like to use free software like ffmpeg or Handbrake.

I've found guides on how to crop and how to stretch videos independently, but I'm afraid that re-encoding the videos twice loses quality and takes a lot of time, so I'd like to do it all in one pass.

Does anyone have any ideas on how to do this?
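
A minimal sketch of one way to do all three steps in a single re-encode, chaining crop, scale and pad in one filter graph (the exact percentages, the file names, and a square-pixel source are assumptions; shown as a Python subprocess call for consistency with the other sketches in this digest):

import subprocess

# crop ~15% of the height (centred), stretch the width ~10%, then pad onto a 16:9
# canvas so a small border remains on the sides -- all in one encode
vf = ("crop=iw:trunc(ih*0.85/2)*2,"
      "scale=trunc(iw*1.10/2)*2:ih,"
      "pad=trunc(ih*16/9/2)*2:ih:(ow-iw)/2:0,"
      "setsar=1")

subprocess.run(["ffmpeg", "-i", "input_4x3.avi", "-vf", vf, "output_16x9.mp4"])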


process video stream from memory buffer


I need to parse a video stream (MPEG-TS) from a proprietary network protocol (which I already know how to do), and then I would like to use OpenCV to process the video stream into frames. I know how to use cv::VideoCapture from a file or from a standard URL, but I would like to set up OpenCV to read from a buffer (or buffers) in memory where I can store the video stream data until it is needed. Is there a way to set up a callback method (or any other interface) so that I can still use the cv::VideoCapture object? Is there a better way to process the video without writing it out to a file and re-reading it? I would also consider using FFmpeg directly if that is a better choice. I think I can convert AVFrames to Mat if needed.
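
cv::VideoCapture cannot read from an arbitrary memory buffer directly, but one common workaround is to pipe the TS bytes into an ffmpeg child process and read decoded raw frames back. A rough sketch of that idea in Python (the resolution and the ts_buffer name are assumptions; in a real program the write and read sides should run on separate threads to avoid pipe deadlock):

import subprocess
import numpy as np

WIDTH, HEIGHT = 1280, 720          # assumed stream resolution

proc = subprocess.Popen(
    ["ffmpeg", "-loglevel", "error",
     "-f", "mpegts", "-i", "pipe:0",          # MPEG-TS comes in on stdin
     "-f", "rawvideo", "-pix_fmt", "bgr24",   # raw BGR frames go out on stdout
     "pipe:1"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

proc.stdin.write(ts_buffer)        # ts_buffer: bytes already parsed from the proprietary protocol
proc.stdin.close()

frame_size = WIDTH * HEIGHT * 3
while True:
    raw = proc.stdout.read(frame_size)
    if len(raw) < frame_size:
        break
    # each frame is now an ordinary HxWx3 array, equivalent to an OpenCV Mat
    frame = np.frombuffer(raw, np.uint8).reshape(HEIGHT, WIDTH, 3)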

Stream specifier '' in filtergraph description [0][1]concat=a=1:n=1:v=1[s0] matches no streams


I am trying to concatenate two audio files in Django with ffmpeg, but I'm getting this error: Stream specifier '' in filtergraph description [0][1]concat=a=1:n=1:v=1[s0] matches no streams.

Here is my function:

def audiomarge(request):
    recorded_audio = request.FILES['audio']
    new = tempSong(tempSongFile=recorded_audio)
    new.tempSongFile.name = 'test.wav'
    new.save()
    record_file_path = new.tempSongFile.path
    record_file_path = str(record_file_path)
    recorded_audio = request.POST.get('audio')
    songslug = request.POST.get('songslug')
    current_song = Song.objects.filter(slug=songslug)[0]
    current_song_path = current_song.songFile.url
    current_song_path = '.' + (str(current_song_path))

    input_first = ffmpeg.input(current_song_path)
    input_second = ffmpeg.input(record_file_path)

    ffmpeg.concat(input_first, input_second, v=1, a=1).output('./finished_video.wav').run()
    return HttpResponse('okay')

I have also tried .compile() instead of .run(); in that case nothing happens.
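
A guess at what is going on (hedged, assuming both files are audio-only): ffmpeg-python's concat defaults to expecting a video stream per segment (v=1), so the generated filtergraph asks each input for video and finds none, which matches the "matches no streams" complaint. A minimal sketch of the audio-only form, using the same variable names as the view above:

(
    ffmpeg
    .concat(input_first.audio, input_second.audio, v=0, a=1)  # audio-only concat
    .output('./finished_video.wav')
    .run()
)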

No accelerated colorspace conversion found from yuv420p to argb


I am a novice with FFmpeg and have recently taken over a code base built by a previous engineer. The FFmpeg code runs on an App Engine instance that edits videos when they are uploaded.

This code generates a title animation that is later used as an overlay.

exports.generateTitleAnimation = function(metadata, destPath, options = {}) {
const peeqLogoPath = "/app/assets/peeq-logo.png";
const whiteBarMovPath = "/app/assets/whiteBar.mov";
const titleFontPath = "/app/assets/Sofia-Pro-Black.otf";
const dateStrFontPath = "/app/assets/Sofia-Pro-Medium.otf";
const outputDuration = 5.52;
const src01 = "color=c=white:s=1920x1080:duration="+ outputDuration;
const src02 = "color=c=white@0.0:s=1920x1080:r=120:duration="+ outputDuration;

var dateStrXOffset = "(92";
var filterComplexStr = "[1]";

if (metadata.title) {
 const title = metadata.title.toUpperCase();
 filterComplexStr += "drawtext=fontfile="+ titleFontPath + ":text='"+ title + "':x='floor(92*(min((t-1.75)^29,0)+max((t-3.75)^29,0)+1))':y=622+30+2:fontsize=70:fontcolor=black:ft_load_flags=render,";
}
if (metadata.subTitle) {
 const subTitle = metadata.subTitle.toUpperCase();
 filterComplexStr += "drawtext=fontfile="+ titleFontPath + ":text='"+ subTitle + "':x='floor(92*(min((t-2.0)^29,0.0)+max((t-3.8)^29,0.0)+1.0))':y=622+184-20-60+9:fontsize=46:fontcolor=black:ft_load_flags=render,";

 dateStrXOffset += "+30*"+ (subTitle.length + 1);
}
if (metadata.dateStr) {
 filterComplexStr += "drawtext=fontfile="+ dateStrFontPath + ":text='"+ metadata.dateStr + "':x='floor("+ dateStrXOffset + ")*(min((t-2.0)^29,0.0)+max((t-3.8)^29,0.0)+1.0))':y=622+184-20-60+9:fontsize=46:fontcolor=black:ft_load_flags=render,";
}
console.log("generateTitleAnimation generating")
filterComplexStr += "split=10[t01][t02][t03][t04][t05][t06][t07][t08][t09][t10];[t02]setpts=PTS+0.0166/TB[d02];[t03]setpts=PTS+0.033/TB[d03];[t04]setpts=PTS+0.05/TB[d04];[t05]setpts=PTS+0.0666/TB[d05];[t06]setpts=PTS+0.083/TB[d06];[t07]setpts=PTS+0.1/TB[d07];[t08]setpts=PTS+0.1166/TB[d08];[t09]setpts=PTS+0.133/TB[d09];[t10]setpts=PTS+0.15/TB[d10];[d10][d09]blend=average,[d08]blend=darken,[d07]blend=average,[d06]blend=darken,[d05]blend=average,[d04]blend=darken,[d03]blend= average,[d02]blend=darken,[t01]blend=average,colorkey=white:0.2:0.0,perspective=y1=W*0.176327:y3=H+W*0.176327[text01];[2][3]overlay=x=(W-w)*0.5:y=(H-h)*0.5:enable='between(t,0,3.0)'[logo01];[logo01][text01]overlay[outv]";

var args = ["-y", "-f", "lavfi", "-i", src01, "-f", "lavfi", "-i", src02, "-i", whiteBarMovPath, "-i", peeqLogoPath, "-filter_complex", filterComplexStr, "-vcodec", "qtrle", "-crf:v", "28", "-codec:a", "aac", "-ac", "2", "-ar", "44100", "-ab", "128k", "-map", "[outv]", destPath];

//console.log("args", args);
return childProcess.spawn('ffmpeg', args).then((ffResult) => {
 return destPath;
}, (err) => {
 //console.error(new Error("generateTitleAnimation:"+ err));
 console.error(err);
 return Promise.reject(err);
});};

destPath is a .mov file

A few days ago, the backend started throwing this error:

stderr: 'ffmpeg version 3.4.2-1~16.04.york0.2 Copyright (c) 2000-2018
 the FFmpeg developers\n built with gcc 5.4.0 (Ubuntu 5.4.0-
6ubuntu1~16.04.9) 20160609\n configuration: --prefix=/usr --extra-
version=\'1~16.04.york0.2\' --toolchain=hardened --
libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --
enable-gpl --disable-stripping --enable-avresample --enable-avisynth --
enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --
enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --
enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-
libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-
libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --
enable-librubberband --enable-librsvg --enable-libshine --enable-
libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-
libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --
enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 -
-enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-
openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --
enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-
libopencv --enable-libx264 --enable-shared\n libavutil 55. 78.100 / 55.
 78.100\n libavcodec 57.107.100 / 57.107.100\n libavformat 57. 83.100 /
 57. 83.100\n libavdevice 57. 10.100 / 57. 10.100\n libavfilter 
6.107.100 / 6.107.100\n libavresample 3. 7. 0 / 3. 7. 0\n libswscale 4.
 8.100 / 4. 8.100\n libswresample 2. 9.100 / 2. 9.100\n libpostproc 54.
 7.100 / 54. 7.100\nInput #0, lavfi, from 
\'color=c=white:s=1920x1080:duration=5.52\':\n Duration: N/A, start: 
0.000000, bitrate: N/A\n Stream #0:0: Video: rawvideo (I420 / 
0x30323449), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 25 tbr, 25 tbn, 25
 tbc\nInput #1, lavfi, from 
\'color=c=white@0.0:s=1920x1080:r=120:duration=5.52\':\n Duration: N/A,
 start: 0.000000, bitrate: N/A\n Stream #1:0: Video: rawvideo (I420 /
 0x30323449), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 120 fps, 120 tbr,
 120 tbn, 120 tbc\nInput #2, mov,mp4,m4a,3gp,3g2,mj2, from 
\'/app/assets/whiteBar.mov\':\n Metadata:\n major_brand : qt \n 
minor_version : 537199360\n compatible_brands: qt \n creation_time : 
2018-04-27T15:55:18.000000Z\n Duration: 00:00:05.52, start: 0.000000, 
bitrate: 54847 kb/s\n Stream #2:0(eng): Video: qtrle (rle / 
0x20656C72), bgra(progressive), 1920x1080, 53326 kb/s, SAR 1:1 DAR 16:9, 60 
fps, 60 tbr, 60 tbn, 60 tbc (default)\n Metadata:\n creation_time : 
2018-04-27T15:55:18.000000Z\n handler_name : Apple Alias Data Handler\n
 encoder : Animation\n timecode : 00:00:00:00\n Stream #2:1(eng): Data:
 none (tmcd / 0x64636D74), 0 kb/s (default)\n Metadata:\n creation_time
 : 2018-04-27T15:55:18.000000Z\n handler_name : Apple Alias Data
 Handler\n timecode : 00:00:00:00\nInput #3, png_pipe, from 
\'/app/assets/peeq-logo.png\':\n Duration: N/A, bitrate: N/A\n Stream 
#3:0: Video: png, rgba(pc), 452x207 [SAR 2834:2834 DAR 452:207], 25 
tbr, 25 tbn, 25 tbc\nCodec AVOption crf (Select the quality for 
constant quality mode) specified for output file #0 (/tmp/972967.mov) 
has not been used for any stream. The most likely reason is either 
wrong type (e.g. a video option with no video streams) or that it is a 
private option of some encoder which was not actually used for any 
stream.\nCodec AVOption b (set bitrate (in bits/s)) specified for 
output file #0 (/tmp/972967.mov) has not been used for any stream. The 
most likely reason is either wrong type (e.g. a video option with no 
video streams) or that it is a private option of some encoder which was 
not actually used for any stream.\nStream mapping:\n Stream #1:0 
(rawvideo) -> drawtext\n Stream #2:0 (qtrle) -> overlay:main\n Stream 
#3:0 (png) -> overlay:overlay\n overlay -> Stream #0:0 (qtrle)\nPress 
[q] to stop, [?] for help\n[swscaler @ 0x56080b828180] No accelerated 
colorspace conversion found from yuv420p to argb.\n[swscaler @ 
0x56080b8b5f40] No accelerated colorspace conversion found from yuva420p to argb.\n',

However, this error only occurs on App Engine. Running npm test on my Mac generates the title perfectly.

Python and ffmpeg audio sync and screen recording issues


I'm using ffmpeg and Python to record my desktop screen. When the program is run, it starts recording; when I press a key combo, it cuts out the last x seconds, saves them, and starts recording again, similar to the "record that" functionality of the Windows Game Bar.

I have it working so it records video just fine, but when I change the ffmpeg command to also record audio from my desktop, I get ValueError: could not convert string to float: 'N/A' when I try to calculate the length of the recorded video. It appears the recording isn't being stopped until after I try to calculate the video length, even though this exact same code works fine when not recording audio.

Additionally, when recording audio, the audio ends up a couple of hundred milliseconds ahead of the video. It's not a lot, but it's enough to be noticeable.

What I'm asking overall: is there a way I can modify the ffmpeg command to prevent the audio desync, and what might be causing the problems I'm getting when attempting to find the length of the video with audio?

import keyboard, signal
from os import remove
from os.path import isfile
from subprocess import Popen, getoutput
from datetime import datetime
import configparser

class Main:
    def __init__(self, save_location, framerate, duration):
        self.save_location = save_location
        self.framerate = int(framerate)
        self.duration = int(duration)
        self.working = self.save_location + '\\' + 'working.avi'
        self.start_recording()

    def start_recording(self):
        if isfile(self.working):
            remove(self.working)

        # start recording to working file at set framerate
        self.process = Popen(f'ffmpeg -thread_queue_size 578 -f gdigrab -video_size 1920x1080 -i desktop -f dshow -i audio="Stereo Mix (Realtek High Definition Audio)" -b:v 7M -minrate 4M -framerate {self.framerate} {self.working}')
        #self.process = Popen(f'ffmpeg -f gdigrab -framerate {self.framerate} -video_size 1920x1080 -i desktop -b:v 7M -minrate 2M {self.working}')

    def trim_video(self):
        # stop recording working file
        self.process.send_signal(signal.CTRL_C_EVENT)

        # call 'cause I have to
        getoutput(f"ffprobe -i {self.working}")

        # get length of working video
        length = getoutput(f'ffprobe -i {self.working} -show_entries format=duration -v quiet -of csv="p=0"')

        # get time before desired recording time
        start = float(length) - self.duration

        # get save location and title
        title = self.save_location + '\\' + self.get_time() + '.avi'

        # cut to last amount of desired time
        Popen(f"ffmpeg -ss {start} -i {self.working} -c copy -t {self.duration} {title}")
        getoutput(f"ffprobe -i {self.working}")

        self.start_recording()

    def get_time(self):
        now = datetime.now()
        return now.strftime("%Y_%m_%d#%H-%M-%S")


if __name__ == "__main__":
    config = configparser.ConfigParser()
    config.read("settings.ini")
    config = config["DEFAULT"]

    run = Main(config["savelocation"].replace("\\", "\\\\"), config["framerate"], config["recordlast"])
    keyboard.add_hotkey("ctrl+shift+alt+g", lambda: run.trim_video())

    while True:
        try:
            keyboard.wait()
        except KeyboardInterrupt:
            pass

The contents of the settings.ini file are listed below

[DEFAULT]
savelocation = C:\
framerate = 30
recordlast = 10

In the code block, the first self.process = Popen line is the one that records audio and has the issues; the second, commented-out line below it is the one that works fine.
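
One guess, sketched below and not verified against this setup: CTRL_C_EVENT may reach ffmpeg before it has finished writing the container, which would explain ffprobe reporting N/A for the duration. Asking ffmpeg to quit by sending q on stdin (as in the first question of this digest) gives it a chance to finalize the file, and the float conversion can be guarded in case the duration is still missing:

# assumes start_recording() opened the process with stdin piped, e.g.
# self.process = Popen(f'ffmpeg ... {self.working}', stdin=PIPE)
self.process.communicate(input=b'q')   # let ffmpeg finish writing the working file

length = getoutput(
    f'ffprobe -i {self.working} -show_entries format=duration -v quiet -of csv="p=0"')
try:
    start = float(length) - self.duration
except ValueError:
    # ffprobe still could not determine a duration
    start = 0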

ffmpeg-python: trim then concat not working


I want to split the video, do some logical processing, and finally merge it

import ffmpeg

info = ffmpeg.probe("test.mp4")
vs = next(c for c in info['streams'] if c['codec_type'] == 'video')
num_frames = vs['nb_frames']
arr = []
in_file = ffmpeg.input('test.mp4')

for i in range(int(int(num_frames) / 30) + 1):
    startTime = i * 30 + 1
    endTime = (1 + i) * 30
    if endTime >= int(num_frames):
        endTime = int(num_frames)
    # more more
    arr.append(in_file.trim(start_frame=startTime, end_frame=endTime))

(
    ffmpeg
    .concat(arr)
    .output('out.mp4')
    .run()
)

I don't understand why this is happening

TypeError: Expected incoming stream(s) to be of one of the following types: ffmpeg.nodes.FilterableStream; got 
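
The traceback suggests concat received the Python list itself rather than the individual streams; ffmpeg.concat() takes the streams as separate arguments, so unpacking the list may be all that's needed to get past the TypeError (a minimal sketch):

(
    ffmpeg
    .concat(*arr)          # unpack the list so each trimmed segment is its own stream
    .output('out.mp4')
    .run()
)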

OSError: [Errno 9] Bad file descriptor discord bot


I am working on a discord bot using the discord.py API. I am trying to make the bot join a voice channel and play an mp3 file that is in the same directory. It joins the voice channel but prints the error in the title before playing anything. I was wondering what was causing this error. Relevant code below:

@client.command()
async def playClip(ctx):
    global voice
    channel = ctx.message.author.voice.channel
    voice = get(client.voice_clients, guild=ctx.guild)

    if voice and voice.is_connected():
        await voice.move_to(channel)
    else:
        voice = await channel.connect()

    voice.play(discord.FFmpegPCMAudio("BravoSix.mp3"))
    print("test")
    await voice.disconnect()
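
One possibility (a sketch, not verified against this bot): voice.play() returns immediately, so the await voice.disconnect() on the next line tears the connection down while FFmpegPCMAudio is still feeding data, which can surface as a bad file descriptor. Waiting until playback finishes inside playClip before disconnecting avoids that:

import asyncio

voice.play(discord.FFmpegPCMAudio("BravoSix.mp3"))
while voice.is_playing():          # let the clip finish before tearing down
    await asyncio.sleep(1)
await voice.disconnect()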

FFMPEG, faulty input with only a few keyframes?


I have a video that plays fine in MPC-HC, but I've been trying to re-encode it with FFmpeg to no avail.

The re-encoded output invariably stops about 5 minutes into the clip and freezes. Checking the keyframes, it seems the last keyframe really is at around the 5-minute mark, even though the video is completely watchable for its entire duration of over an hour. I've been looking for a way to repopulate the keyframe index or something similar, but the answer eludes me.

Both Handbrake and Premiere Pro fail to handle this video properly as well. Premiere Pro imports it as though it's 5 minutes long, and Handbrake freezes when the encoding reaches the 5-minute mark.

Even doing a -c copy would give me an output that terminates at the 5 minute mark.

What can I do to fix this?

Edit: Added log as requested.

ffmpeg version git-2020-02-27-9b22254 Copyright (c) 2000-2020 the FFmpeg developers
 built with gcc 9.2.1 (GCC) 20200122
 configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
 libavutil 56. 42.100 / 56. 42.100
 libavcodec 58. 73.102 / 58. 73.102
 libavformat 58. 39.101 / 58. 39.101
 libavdevice 58. 9.103 / 58. 9.103
 libavfilter 7. 77.100 / 7. 77.100
 libswscale 5. 6.100 / 5. 6.100
 libswresample 3. 6.100 / 3. 6.100
 libpostproc 55. 6.100 / 55. 6.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000019d5f659b00] st: 0 edit list: 1 Missing key frame while searching for timestamp: 20
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000019d5f659b00] st: 0 edit list 1 Cannot find an index entry before timestamp: 20.
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '[REDACTED].mp4':
 Metadata:
 major_brand : isom
 minor_version : 1
 compatible_brands: isom
 creation_time : 2009-07-27T12:06:40.000000Z
 Duration: 01:13:58.42, start: 0.000000, bitrate: 2671 kb/s
 Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 716x480 [SAR 8:9 DAR 179:135], 2504 kb/s, SAR 29127:32768 DAR 1007794:760071, 29.97 fps, 29.97 tbr, 48k tbn, 59.94 tbc (default)
 Metadata:
 rotate : 0
 creation_time : 2007-09-08T18:58:09.000000Z
 encoder : AVC Coding
 Side data:
 displaymatrix: rotation of -0.00 degrees
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 159 kb/s (default)
 Metadata:
 creation_time : 2009-07-27T12:06:47.000000Z
 handler_name : GPAC ISO Audio Handler
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
 Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0000019d5f65de40] using SAR=8/9
[libx264 @ 0000019d5f65de40] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000019d5f65de40] profile High 4:4:4 Predictive, level 3.0, 4:2:0, 8-bit
[libx264 @ 0000019d5f65de40] 64 - core 159 - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=0 mixed_ref=1 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=0 chroma_qp_offset=0 threads=9 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc=cqp mbtree=0 qp=0
Output #0, mp4, to '[REDACTED].mp4':
 Metadata:
 major_brand : isom
 minor_version : 1
 compatible_brands: isom
 encoder : Lavf58.39.101
 Stream #0:0(und): Video: h264 (libx264) (avc1 / 0x31637661), yuv420p(progressive), 716x480 [SAR 29127:32768 DAR 1007794:760071], q=-1--1, 29.97 fps, 11988 tbn, 29.97 tbc (default)
 Metadata:
 encoder : Lavc58.73.102 libx264
 creation_time : 2007-09-08T18:58:09.000000Z
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
 displaymatrix: rotation of -0.00 degrees
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 192 kb/s (default)
 Metadata:
 creation_time : 2009-07-27T12:06:47.000000Z
 handler_name : GPAC ISO Audio Handler
 encoder : Lavc58.73.102 aac
frame= 9958 fps= 70 q=-1.0 Lsize= 960472kB time=00:11:13.32 bitrate=11685.6kbits/s speed=4.71x
video:944292kB audio:15836kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.035744%
[libx264 @ 0000015f858dde40] frame I:46 Avg QP: 0.00 size:113701
[libx264 @ 0000015f858dde40] frame P:9912 Avg QP: 0.00 size: 97026
[libx264 @ 0000015f858dde40] mb I I16..4: 52.6% 10.7% 36.7%
[libx264 @ 0000015f858dde40] mb P I16..4: 22.5% 5.5% 10.5% P16..4: 31.8% 15.2% 11.1% 0.0% 0.0% skip: 3.4%
[libx264 @ 0000015f858dde40] 8x8 transform intra:14.3% inter:40.9%
[libx264 @ 0000015f858dde40] coded y,uvDC,uvAC intra: 95.6% 84.9% 84.0% inter: 81.2% 83.0% 82.5%
[libx264 @ 0000015f858dde40] i16 v,h,dc,p: 53% 44% 2% 1%
[libx264 @ 0000015f858dde40] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 40% 47% 8% 1% 1% 1% 1% 1% 1%
[libx264 @ 0000015f858dde40] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 33% 44% 5% 3% 5% 4% 2% 3% 1%
[libx264 @ 0000015f858dde40] i8c dc,h,v,p: 12% 44% 43% 1%
[libx264 @ 0000015f858dde40] Weighted P-Frames: Y:1.0% UV:0.2%
[libx264 @ 0000015f858dde40] ref P L0: 59.1% 10.2% 27.8% 2.9% 0.0%
[libx264 @ 0000015f858dde40] kb/s:23281.49
[aac @ 0000015f85b5b880] Qavg: 182.070

Recording Audio and Video simultaneously to separate audio and video outputs [closed]


I've tried moving the -t duration argument around, to no avail:

ffmpeg -y \
 -f alsa -ar 48000 -t 3 -i hw:2 out.wav \
 -f v4l2 -framerate 10 -video_size 1280x720 -t 3 -i /dev/video0 out.mkv \
 -map 0 out.wav -map 1 out.mkv
ffmpeg -y \
 -f alsa -ar 48000 -i hw:2 out.wav \
 -f v4l2 -framerate 10 -video_size 1280x720 -i /dev/video0 out.mkv \
 -map 0 -t 6 out.wav -t 6 -map 1 out.mkv
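
For reference, ffmpeg expects all inputs first and then one group of options per output, each followed by its own output file. A hedged rework of the second attempt above along those lines (device names unchanged), expressed as a Python subprocess call to stay consistent with the other sketches in this digest:

import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-f", "alsa", "-ar", "48000", "-i", "hw:2",                       # input 0: audio
    "-f", "v4l2", "-framerate", "10", "-video_size", "1280x720",
    "-i", "/dev/video0",                                              # input 1: video
    "-map", "0", "-t", "6", "out.wav",                                # output 1: audio only
    "-map", "1", "-t", "6", "out.mkv",                                # output 2: video only
])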

How to add transparent watermark in center of a video with ffmpeg?


I am currently using these commands:

Top left corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:10 [out]" outputvideo.flv

Top right corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:10 [out]" outputvideo.flv

Bottom left corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:main_h-overlay_h-10 [out]" outputvideo.flv

Bottom right corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" outputvideo.flv

How do I place the watermark in the center of the video?
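
For centring, the overlay x/y expressions just need half of the leftover space in each direction: (main_w-overlay_w)/2 and (main_h-overlay_h)/2. A minimal variant of the commands above (same legacy movie=/[in]/[out] syntax, file names as placeholders), wrapped in a Python call like the other sketches here:

import subprocess

subprocess.run([
    "ffmpeg", "-i", "inputvideo.avi",
    "-vf", "movie=watermarklogo.png [watermark]; "
           "[in][watermark] overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2 [out]",
    "outputvideo.flv",
])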

avcodec/cfhd: check if band encoding is valid

avcodec/cfhd: check if band encoding is valid

Also simplify the lossless check, as a band encoding value of 5 always specifies lossless mode.
  • [DH] libavcodec/cfhd.c

avcodec/cfhd: reindent

avcodec/cfhd: reindent
  • [DH] libavcodec/cfhd.c

avcodec/cfhd: use init_get_bits8()

avcodec/cfhd: use init_get_bits8()
  • [DH] libavcodec/cfhd.c