Channel: MediaSPIP

ffmpeg watermark


I am using a statically compiled FFmpeg library for Android obtained from Bambuser. The problem I am facing now is that the FFmpeg build from Bambuser does not support watermarking.

ffmpeg -sameq -i mirror_watermark.mp4 -vf "movie=mirror_watermark.png [logo]; [in][logo] overlay=main_w-overlay_w:main_h-overlay_h [out]" output.mp4
No such filter: 'movie'

Running ./configure --list-filters | grep movie returns nothing.

So I guess I have to build a newer version of FFmpeg, but I do not know how to get started: with the Bambuser script everything was already set up, and I only added certain encoders and decoders to it.
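
For reference, a rough sketch of the rebuild step is given below. It is only a sketch: the real cross-compile flags live in the Bambuser build script, and since filters are normally enabled by default, that script most likely passes --disable-filters (or --disable-everything), so the two filters this command needs have to be re-enabled on top of the existing flags.

# Hypothetical sketch: keep the Android cross-compile flags from the Bambuser script
# and add the filter switches; shown here without those flags for brevity.
./configure --enable-filter=movie --enable-filter=overlay
make

# Verify the filters were compiled in:
./configure --list-filters | grep movie
./ffmpeg -filters | grep -E 'movie|overlay'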


How to crossfade 3 videos using ffmpeg?


I am currently using the command below, which works fine for 2 videos, but how do I crossfade 3 videos?

ffmpeg -report -i "video2.mp4" -i "video3.mp4" -an \
-filter_complex \
"[0:v]trim=start=0:end=9,setpts=PTS-STARTPTS[firstclip]; \
[1:v]trim=start=1,setpts=PTS-STARTPTS[secondclip]; \
[0:v]trim=start=9:end=10,setpts=PTS-STARTPTS[fadeoutsrc]; \
[1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc]; \
[fadeinsrc]format=pix_fmts=yuva420p,fade=t=in:st=0:d=1:alpha=1[fadein]; \
[fadeoutsrc]format=pix_fmts=yuva420p,fade=t=out:st=0:d=1:alpha=1[fadeout]; \
[fadein]fifo[fadeinfifo]; \
[fadeout]fifo[fadeoutfifo]; \
[fadeoutfifo][fadeinfifo]overlay[crossfade]; \
[firstclip][crossfade][secondclip]concat=n=3" \
outputname.mp4

Thanks!
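
For what it is worth, a shorter way to chain three inputs is the xfade filter, sketched below. This is a hedged example rather than a drop-in answer: xfade only exists in newer FFmpeg builds (4.3 and later), all inputs must share the same resolution, frame rate and pixel format, and the offsets assume each clip is about 10 seconds with a 1-second fade (each offset is the length of everything before that fade minus the fade duration), so they must be recomputed for the real clip lengths.

ffmpeg -i video1.mp4 -i video2.mp4 -i video3.mp4 -an -filter_complex \
"[0:v][1:v]xfade=transition=fade:duration=1:offset=9[v01]; \
 [v01][2:v]xfade=transition=fade:duration=1:offset=18[vout]" \
-map "[vout]" outputname.mp4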

Error while building PJSIP with FFMPEG storage size of 'dstFormat' isn't known


Building PJSIP for armeabi with FFmpeg gives me the following output:

../src/pjmedia/converter_libswscale.c: In function 'factory_create_converter':
../src/pjmedia/converter_libswscale.c:70:24: error: storage size of 'srcFormat' isn't known
 enum AVPixelFormat srcFormat, dstFormat;
                    ^

I've read many articles about problems with the PixelFormat -> AVPixelFormat rename in newer versions of FFmpeg (such as the one I've built for PJSIP), and I have updated all of my PJSIP sources to use AVPixelFormat.

Building environment:

Ubuntu 16.04 LTS 64bit running in VirtualBox

PJSIP 2.6

FFMPEG 3.0.9

Compiling PJSIP with flags:

#define PJMEDIA_HAS_VIDEO 1

#define PJMEDIA_HAS_FFMPEG 1

NDK-flag:

--with-ffmpeg="${BASE_DIR}/ffmpeg-output"

ffmpeg-output folder contains these files:

  • libavcodec.so
  • libavcodec.so.57
  • libavcodec.so.57.24.102
  • libavdevice.so
  • libavdevice.so.57
  • libavdevice.so.57.0.101
  • libavfilter.so
  • libavfilter.so.6
  • libavfilter.so.6.31.100
  • libavformat.so
  • libavformat.so.57
  • libavformat.so.57.25.100
  • libavutil.so
  • libavutil.so.55
  • libavutil.so.55.17.103
  • libpostproc.so
  • libpostproc.so.54
  • libpostproc.so.54.0.100
  • libswresample.so
  • libswresample.so.2
  • libswresample.so.2.0.101
  • libswscale.so
  • libswscale.so.4
  • libswscale.so.4.0.100

Trying to build against different versions of FFmpeg didn't help at all (I got the same error in the end).

Any help will be appreciated.

UPD: Is there anything strange about AVPixelFormat in this part of the configure output related to FFmpeg:

checking ffmpeg packages... libavdevice libavformat libavcodec libswscale libavutil
checking for enum AVPixelFormat... no
checking for v4l2_open in -lv4l2... no
Checking if OpenH264 is disabled... yes
Skipping Intel IPP settings (not wanted)

I mean, the line says "checking for enum AVPixelFormat... no". Is there any flag I need to pass to tell PJSIP "you have to work with AVPixelFormat"?
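
One thing that stands out, offered as a guess to verify rather than a confirmed fix: the ffmpeg-output folder listed above contains only .so files, with no include/ or lib/ subdirectories, while the "checking for enum AVPixelFormat" test has to compile a small program against the FFmpeg headers. If --with-ffmpeg expects a normal install prefix, the layout check and re-run would look roughly like this (the configure-android invocation is illustrative):

# Install FFmpeg into a prefix that contains both headers and libraries, e.g.
#   ./configure --prefix="${BASE_DIR}/ffmpeg-output" ... && make install
ls "${BASE_DIR}/ffmpeg-output/include/libavutil/pixfmt.h"   # declares enum AVPixelFormat
ls "${BASE_DIR}/ffmpeg-output/lib/libavutil.so"

# Then rebuild PJSIP against that prefix:
./configure-android --use-ndk-cflags --with-ffmpeg="${BASE_DIR}/ffmpeg-output"

If the AVPixelFormat check then reports "yes", converter_libswscale.c should see the enum definition and the storage-size error should go away.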

lavfi/nlmeans: use AV_CEIL_RSHIFT instead of deprecated FF_CEIL_RSHIFT

  • [DH] libavfilter/vf_nlmeans.c

lavfi/swaprect: use AV_CEIL_RSHIFT instead of deprecated FF_CEIL_RSHIFT

  • [DH] libavfilter/vf_swaprect.c

lavc/cfhd: use AV_CEIL_RSHIFT instead of deprecated FF_CEIL_RSHIFT

  • [DH] libavcodec/cfhd.c

Error during decoding (-1094995529) Invalid data found when processing input


I want hardware acceleration using dxva2. In my code, software decoding works fine, but with hardware acceleration avcodec_send_packet returns a negative value (-1094995529). Please help me fix this.

Initialization of dxva2 hardware :

enum AVHWDeviceType type = av_hwdevice_find_type_by_name("dxva2");
if (type == AV_HWDEVICE_TYPE_NONE) {
    fprintf(fp, "Hardware device type is not supported.\n");
    return -1;
}

for (int i = 0;; i++) {
    const AVCodecHWConfig *config = avcodec_get_hw_config(sChannelInfo[chNum].codec, i);
    if (!config) {
        fprintf(fp, "Failed to get hardware configuration\n");
        return -1;
    }
    if (config->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX &&
        config->device_type == type) {
        hw_pix_fmt = config->pix_fmt;
        fprintf(fp, "config->pix_fmt %d\n", (int)hw_pix_fmt);
        break;
    }
}

sChannelInfo[chNum].pCodecContext->pix_fmt = AV_PIX_FMT_DXVA2_VLD;
sChannelInfo[chNum].pCodecContext->get_format = get_hw_format;
av_opt_set_int(sChannelInfo[chNum].pCodecContext, "refcounted_frames", 1, 0);
if (hw_decoder_init(sChannelInfo[chNum].pCodecContext, type) < 0) {
    fprintf(fp, "hardware decoder initialisation failed\n");
    return -1;
}

Decode:

int ret = avcodec_send_packet(sChannelInfo[chNum].pCodecContext, &pkt);
if (ret < 0) {
    char buffer[64];
    av_make_error_string(buffer, 64, ret);
    fprintf(fp, "Error during decoding %d %s\n", ret, buffer);
    return ret;
}

Why does the avcodec_send_packet API return -1094995529? Is there anything missing in the dxva2 initialization?
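
As a side note rather than a diagnosis: -1094995529 is the numeric value of AVERROR_INVALIDDATA ("Invalid data found when processing input"), which usually points at the packet data or codec parameters rather than the hardware device itself. A hedged way to check whether dxva2 decoding of the same stream works at all outside this code is the ffmpeg command-line tool, assuming a Windows build with dxva2 enabled and with input.h264 standing in for the real input:

ffmpeg -hwaccel dxva2 -i input.h264 -f null -

If that decodes cleanly, the problem is more likely in how packets are fed to avcodec_send_packet than in the dxva2 initialization.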

Play video with FFmpeg in Mac OS


I need to play video with FFmpeg programmatically in Xcode. We have the command-line utility, but it has no controls or components for playback (like AVPlayer in AVFoundation).

Here is the code in Swift 4:

let process = Process()
process.launchPath = Bundle.main.path(forResource: "ffmpeg", ofType: "")
process.arguments = ["-i", Bundle.main.path(forResource: "mov", ofType: "mov")!, "-f", "opengl"]
let pipe = Pipe()
process.standardOutput = pipe
process.launch()
process.waitUntilExit()
let data = pipe.fileHandleForReading.readDataToEndOfFile()
let output = NSString(data: data, encoding: String.Encoding.utf8.rawValue)

Undefined symbols av_register_all()


Good day,

I am a beginner with Objective-C and the Xcode IDE. I am trying to use ffmpeg in my iOS application. I cloned https://github.com/kewlbear/FFmpeg-iOS-build-script and built it for arm64 and x86_64.

When I tried to build the app, it failed with:

Ld /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator/CPP3.app/CPP3 normal x86_64
cd /Volumes/sedy/xcode/CPP3
export IPHONEOS_DEPLOYMENT_TARGET=9.1
export PATH="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -arch x86_64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator9.1.sdk -L/Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator -L/Volumes/sedy/xcode/CPP3/CPP3/ffmpeg/lib -F/Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator -filelist /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Intermediates/CPP3.build/Debug-iphonesimulator/CPP3.build/Objects-normal/x86_64/CPP3.LinkFileList -Xlinker -rpath -Xlinker @executable_path/Frameworks -mios-simulator-version-min=9.1 -Xlinker -objc_abi_version -Xlinker 2 -stdlib=libc++ -fobjc-arc -fobjc-link-runtime -lavcodec -lavdevice -lavfilter -lavformat -lavutil -lswresample -lswscale -framework AVFoundation -liconv -lbz2 -Xlinker -dependency_info -Xlinker /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Intermediates/CPP3.build/Debug-iphonesimulator/CPP3.build/Objects-normal/x86_64/CPP3_dependency_info.dat -o /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator/CPP3.app/CPP3
Undefined symbols for architecture x86_64:
  "av_register_all()", referenced from:
      Decoder::Decoder() in ViewController.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

There is a zipped project on OneDrive (http://1drv.ms/1KkPAia), because that is the best way to explain my problem.

Please help me and explain why this problem arose.

Thanks very much.

-- https://github.com/kewlbear/FFmpeg-iOS-build-script, http://1drv.ms/1KkPAia

FFMPEG: RTSP re-stream dies randomly


I have a security camera streaming RTSP, and I wish to re-stream this to an RTMP ingest server. For now I'm using my laptop as an ffmpeg proxy, but eventually I'll use a raspberry pi or something similar (cheap/small)

Here's the command I'm using (pretty simple):

ffmpeg -i rtsp://@10.0.0.16:554/1/h264major -c:v libx264 -c:a none -f flv rtmp://output/camera_stream

This works but after a minute or two the stream dies. Here's the output:

ffmpeg version N-90057-g7c82e0f Copyright (c) 2000-2018 the FFmpeg developers built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.6) 20160609 configuration: --prefix=/home/sbarnett/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/sbarnett/ffmpeg_build/include --extra-ldflags=-L/home/sbarnett/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/home/sbarnett/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libspeex --enable-nonfree libavutil 56. 7.101 / 56. 7.101 libavcodec 58. 11.101 / 58. 11.101 libavformat 58. 9.100 / 58. 9.100 libavdevice 58. 1.100 / 58. 1.100 libavfilter 7. 12.100 / 7. 12.100 libswscale 5. 0.101 / 5. 0.101 libswresample 3. 0.101 / 3. 0.101 libpostproc 55. 0.100 / 55. 0.100
Input #0, rtsp, from 'rtsp://@10.0.0.16:554/1/h264major': Metadata: title : h264major comment : h264major Duration: N/A, start: 0.360000, bitrate: N/A Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 720x480, 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream mapping: Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0x38843c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x38843c0] profile High, level 3.0
[libx264 @ 0x38843c0] 264 - core 155 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'rtmp://output/camera_stream': Metadata: title : h264major comment : h264major encoder : Lavf58.9.100 Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuvj420p(pc), 720x480, q=-1--1, 25 fps, 1k tbn, 25 tbc Metadata: encoder : Lavc58.11.101 libx264 Side data: cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Past duration 0.999992 too large
    Last message repeated 29 times
[rtsp @ 0x3847600] max delay reached. need to consume packet
[rtsp @ 0x3847600] RTP: missed 48 packets
Past duration 0.999992 too large
    Last message repeated 4 times
frame= 44 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A dup=0 drop=5 speed= 0x frame= 57 fps= 54 q=28.0 size= 43kB time=00:00:00.16 bitrate=2186.4kbits/s dup=0 drop=5 speed=0.153x ... (lots of similar messages) ... frame= 1163 fps= 26 q=28.0 size= 1341kB time=00:00:44.84 bitrate= 245.0kbits/s dup=0 drop=5 speed=0.99x frame= 1177 fps= 26 q=28.0 size= 1353kB time=00:00:45.40 bitrate= 244.2kbits/s dup=0 drop=5 speed=0.99x [rtsp @ 0x3847600] max delay reached. need to consume packet
[rtsp @ 0x3847600] RTP: missed 2 packets
frame= 1190 fps= 26 q=28.0 size= 1370kB time=00:00:45.92 bitrate= 244.4kbits/s dup=0 drop=5 speed=0.99x [h264 @ 0x38c08c0] Increasing reorder buffer to 1
frame= 1201 fps= 26 q=28.0 size= 1381kB time=00:00:46.36 bitrate= 244.0kbits/s dup=0 drop=5 speed=0.989x frame= 1214 fps= 26 q=28.0 size= 1393kB time=00:00:46.88 bitrate= 243.4kbits/s dup=0 drop=5 speed=0.989x ... (lots of similar messages) ... frame= 1761 fps= 25 q=28.0 size= 2030kB time=00:01:08.80 bitrate= 241.7kbits/s dup=0 drop=5 speed=0.993x frame= 1774 fps= 25 q=28.0 size= 2041kB time=00:01:09.32 bitrate= 241.2kbits/s dup=0 drop=5 speed=0.993x [flv @ 0x3884900] Failed to update header with correct duration.
[flv @ 0x3884900] Failed to update header with correct filesize.
frame= 1782 fps= 25 q=-1.0 Lsize= 2127kB time=00:01:11.64 bitrate= 243.2kbits/s dup=0 drop=5 speed=1.02x video:2092kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.679417%
[libx264 @ 0x38843c0] frame I:8 Avg QP:16.89 size: 42446
[libx264 @ 0x38843c0] frame P:1672 Avg QP:19.54 size: 1065
[libx264 @ 0x38843c0] frame B:102 Avg QP:23.00 size: 205
[libx264 @ 0x38843c0] consecutive B-frames: 92.4% 0.0% 0.0% 7.6%
[libx264 @ 0x38843c0] mb I I16..4: 12.9% 36.2% 50.9%
[libx264 @ 0x38843c0] mb P I16..4: 0.2% 0.2% 0.0% P16..4: 16.7% 0.7% 1.0% 0.0% 0.0% skip:81.1%
[libx264 @ 0x38843c0] mb B I16..4: 0.1% 0.1% 0.0% B16..8: 11.7% 0.1% 0.0% direct: 1.5% skip:86.5% L0:62.2% L1:35.3% BI: 2.5%
[libx264 @ 0x38843c0] 8x8 transform intra:40.8% inter:47.4%
[libx264 @ 0x38843c0] coded y,uvDC,uvAC intra: 46.5% 53.0% 17.2% inter: 3.9% 8.7% 0.0%
[libx264 @ 0x38843c0] i16 v,h,dc,p: 21% 56% 8% 15%
[libx264 @ 0x38843c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 33% 31% 1% 2% 3% 2% 2% 3%
[libx264 @ 0x38843c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 39% 9% 3% 3% 4% 5% 3% 8%
[libx264 @ 0x38843c0] i8c dc,h,v,p: 43% 33% 21% 3%
[libx264 @ 0x38843c0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x38843c0] ref P L0: 88.0% 1.4% 6.6% 4.0%
[libx264 @ 0x38843c0] ref B L0: 99.4% 0.5% 0.1%
[libx264 @ 0x38843c0] ref B L1: 99.4% 0.6%
[libx264 @ 0x38843c0] kb/s:238.73

The camera is pretty cheap (from China), so it's likely I'm getting bad data from it or it cuts out for a few seconds at a time. Ideally ffmpeg would handle this gracefully (ignore bad data and wait as long as necessary for good data before resuming encoding).
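
A hedged sketch of what is often tried with flaky RTSP sources, not a guaranteed fix for this camera: force RTP over TCP (which avoids the "RTP: missed N packets" situation seen above), give the RTSP demuxer a socket timeout, and restart ffmpeg automatically whenever it exits. The timeout value is illustrative.

while true; do
    ffmpeg -rtsp_transport tcp -stimeout 5000000 \
           -i "rtsp://@10.0.0.16:554/1/h264major" \
           -c:v libx264 -an -f flv rtmp://output/camera_stream
    echo "ffmpeg exited, restarting in 2 seconds..." >&2
    sleep 2
done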

Can't read video using VideoCapture in Opencv


I have installed OpenCV 2.4.13.6 on my Ubuntu 16.04 OS. I have ffmpeg, and during the OpenCV installation I set WITH_FFMPEG to ON. My ffmpeg is working; if I type ffmpeg at the command line, I get:

ffmpeg version N-90982-gb995ec0 Copyright (c) 2000-2018 the FFmpeg developers built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.9) 20160609 configuration: --prefix=/home/nyan/ffmpeg_build --enable-shared --extra-cflags=-I/home/nyan/ffmpeg_build/include --extra-ldflags=-L/home/nyan/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/home/nyan/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree libavutil 56. 18.100 / 56. 18.100 libavcodec 58. 19.100 / 58. 19.100 libavformat 58. 13.101 / 58. 13.101 libavdevice 58. 4.100 / 58. 4.100 libavfilter 7. 21.100 / 7. 21.100 libswscale 5. 2.100 / 5. 2.100 libswresample 3. 2.100 / 3. 2.100 libpostproc 55. 2.100 / 55. 2.100
Hyper fast Audio and Video encoder
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...

Then I put the ffmpeg paths into .bashrc as:

export PATH=/home/bin${PATH:+:${PATH}}
export PATH=/home/ffmpeg_build${PATH:+:${PATH}}
export PATH=/home/ffmpeg_build/include${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/home/ffmpeg_build/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

In my Opencv libraries I have libopencv_video.so. So video input/output should be fine.

The following program gives me "can't read video". What could be the reason?

I also tried VideoCapture cap(0); it gives me the same error. What is wrong?

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(void) {
    VideoCapture cap("IMG_5715.MOV"); // open the video file (or pass 0 for the default camera)
    if (!cap.isOpened()) {            // check if we succeeded
        cout << "can't read video" << endl;
        return -1;
    }
    while (1) {
        Mat frame;
        cap >> frame;                 // capture frame-by-frame
        // If the frame is empty, break immediately
        if (frame.empty())
            break;
        imshow("Frame", frame);
        waitKey(1);
    }
    cap.release();
    return 0;
}
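
One hedged check before digging further: in OpenCV 2.4 the VideoCapture backends live in the highgui module, not in libopencv_video.so (that library is the video-analysis module), so its presence says nothing about FFmpeg support. Whether the FFmpeg backend was actually linked in can be checked from the shell; /usr/local/lib is the default install prefix and may differ on your system.

# Was OpenCV's video I/O linked against the FFmpeg libraries?
ldd /usr/local/lib/libopencv_highgui.so | grep -iE 'avcodec|avformat'

# Can the FFmpeg build itself decode the file?
ffmpeg -i IMG_5715.MOV -f null -

If the ldd output shows no libavcodec/libavformat lines, OpenCV was built without a working FFmpeg backend despite WITH_FFMPEG=ON, for example because the FFmpeg development files were not found at CMake configuration time.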

How to fetch both live video frame and timestamp from ffmpeg to python on Windows


Searching for an alternative, since OpenCV does not provide timestamps for a live camera stream (on Windows), which my computer vision algorithm requires, I found ffmpeg and this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/. The solution uses ffmpeg and reads its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.

Working up the Python code on Windows, I received the video frames from ffmpeg's stdout, but stderr freezes after delivering the showinfo video-filter details (timestamp) for the first frame.

I recollect seeing somewhere on an ffmpeg forum that video filters like showinfo are bypassed when output is redirected. Is this why the following code does not work as expected?

Expected: It should write video frames to disk as well as print timestamp details.
Actual: It writes video files but does not get the timestamp (showinfo) details.

Here's the code I tried:

import subprocess as sp
import numpy
import cv2

command = ['ffmpeg',
           '-i', 'e:\sample.wmv',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo',
           '-vf', 'showinfo',   # video filter - showinfo will provide frame timestamps
           '-an', '-sn',        # -an, -sn disables audio and sub-title processing respectively
           '-f', 'image2pipe', '-']  # we need to output to a pipe

pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)

# TODO someone on ffmpeg forum said video filters (e.g. showinfo) are bypassed
# when stdout is redirected to pipes???
for i in range(10):
    raw_image = pipe.stdout.read(1280*720*3)
    img_info = pipe.stderr.read(244)  # 244 characters is the current output of showinfo video filter
    print "showinfo output", img_info
    image1 = numpy.fromstring(raw_image, dtype='uint8')
    image2 = image1.reshape((720, 1280, 3))
    # write video frame to file just to verify
    videoFrameName = 'Video_Frame{0}.png'.format(i)
    cv2.imwrite(videoFrameName, image2)
    # throw away the data in the pipe's buffer.
    pipe.stdout.flush()
    pipe.stderr.flush()

So how can I still get the frame timestamps from ffmpeg into the Python code so they can be used in my computer vision algorithm?

-- https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
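
One part of this can be checked outside Python, as a hedged sanity check rather than a complete answer: showinfo is not bypassed when output is redirected. It writes its per-frame lines to stderr, and a plain shell redirection captures them (sample.wmv stands in for the input path), which suggests the freeze comes from the fixed-size pipe.stderr.read(244) call blocking rather than from the filter being skipped.

ffmpeg -i sample.wmv -vf showinfo -an -sn -f null - 2> showinfo.txt
grep pts_time showinfo.txt | head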

lavfi/vf_srcnn: use avio_check instead of access

The filter uses avio for file access already, and avio_check is portable. Fixes trac #7192.
  • [DH] configure
  • [DH] libavfilter/vf_srcnn.c

Get correct framerate from ffmpeg


Good day,

I have a problem: I need to get the correct framerate from the ffmpeg libraries.

I tried to use

pFormatCtx->streams[videoStream]->avg_frame_rate.num

avg_frame_rate.num returns 2997. But when I dumped the meta info, I got:

Input #0, avi, from '/test.avi': Metadata: encoder : MEncoder SVN-r33883(20110719-gcc4.5.2) Duration: 00:49:47.70, start: 0.000000, bitrate: 1294 kb/s Stream #0:0: Video: mpeg4 (Advanced Simple Profile) (XVID / 0x44495658), yuv420p, 856x480 [SAR 1:1 DAR 107:60], 1090 kb/s, SAR 491520:492521 DAR 8192:4603, 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 48000 Hz, stereo, s16p, 192 kb/s
2015-09-20 15:47:02.377 TV3[21607:769601] ready to start audio

The frame rate shown there is 23.98 fps. Which value is correct, and why are they different?
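
A short note on why the numbers differ, plus a hedged ffprobe check: avg_frame_rate is an AVRational, so its numerator alone is not a frame rate. 2997 divided by the accompanying denominator (most likely 125) gives about 23.98, which matches the dump; in C, av_q2d(stream->avg_frame_rate) performs exactly that division. The same rationals can also be inspected from the command line:

ffprobe -v error -select_streams v:0 \
        -show_entries stream=avg_frame_rate,r_frame_rate \
        -of default=noprint_wrappers=1 /test.avi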

real time decoding of SHVC bit streams


Does anyone know an open-source decoder that can perform real-time SHVC bit stream decoding? OpenHEVC states that it has the capability to decode scalable HEVC bit streams, but I was not able to decode an SHVC bit stream generated by the SHM 7.0 reference encoder.

Also, does ffmpeg support the scalable extension of HEVC?

Thanks.


FFMPEG: Too many packets buffered for output stream 0:1


I want to add a logo to a video using FFmpeg. I encountered this error: "Too many packets buffered for output stream 0:1.", "Conversion failed!". I tried with different pictures and videos and always got the same error. Google didn't help much either. I found a thread

C:\Users\Anwender\OneDrive - IT-Center Engels\_Programmierung & Scripting\delphi\_ITCE\Tempater\Win32\Debug\ffmpeg\bin>ffmpeg ^
Mehr? -i C:\Users\Anwender\Videos\CutErgebnis.mp4 ^
Mehr? -i C:\Users\Anwender\Pictures\pic.png ^
Mehr? -filter_complex "overlay=0:0" ^
Mehr? C:\Users\Anwender\Videos\Logo.mp4
ffmpeg version N-90054-g474194a8d0 Copyright (c) 2000-2018 the FFmpeg developers built with gcc 7.2.0 (GCC) configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libmfx --enable-amf --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth libavutil 56. 7.101 / 56. 7.101 libavcodec 58. 11.101 / 58. 11.101 libavformat 58. 9.100 / 58. 9.100 libavdevice 58. 1.100 / 58. 1.100 libavfilter 7. 12.100 / 7. 12.100 libswscale 5. 0.101 / 5. 0.101 libswresample 3. 0.101 / 3. 0.101 libpostproc 55. 0.100 / 55. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\Users\Anwender\Videos\CutErgebnis.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 encoder : Lavf58.9.100 comment : Captured with Snagit 13.1.3.7993 : Microphone - Mikrofon (Steam Streaming Microphone) : Duration: 00:01:51.99, start: 0.015011, bitrate: 148 kb/s Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1918x718 [SAR 1:1 DAR 959:359], 149 kb/s, 14.79 fps, 15 tbr, 15k tbn, 30 tbc (default) Metadata: handler_name : VideoHandler Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 1 kb/s (default) Metadata: handler_name : SoundHandler
Input #1, png_pipe, from 'C:\Users\Anwender\Pictures\pic.png': Duration: N/A, bitrate: N/A Stream #1:0: Video: png, pal8(pc), 400x400, 25 tbr, 25 tbn, 25 tbc
File 'C:\Users\Anwender\Videos\Logo.mp4' already exists. Overwrite ? [y/N] y
Stream mapping: Stream #0:0 (h264) -> overlay:main (graph 0) Stream #1:0 (png) -> overlay:overlay (graph 0) overlay (graph 0) -> Stream #0:0 (libx264) Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
Too many packets buffered for output stream 0:1.
[aac @ 000001f4c5257a40] Qavg: 65305.387
[aac @ 000001f4c5257a40] 2 frames left in the queue on closing
Conversion failed!

My FFMPEG Version: ffmpeg-20180322-ed0e0fe-win64-static

Details about the Video:

 C:\Users\Anwender\OneDrive - IT-Center Engels\_Programmierung & Scripting\delphi\_ITCE\Tempater\Win32\Debug\ffmpeg-20180322-ed0e0fe-win64-static\bin>ffprobe.exe C:\Users\Anwender\Videos\CutErgebnis.mp4
ffprobe version N-90399-ged0e0fe102 Copyright (c) 2007-2018 the FFmpeg developers built with gcc 7.3.0 (GCC) configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth libavutil 56. 11.100 / 56. 11.100 libavcodec 58. 15.100 / 58. 15.100 libavformat 58. 10.100 / 58. 10.100 libavdevice 58. 2.100 / 58. 2.100 libavfilter 7. 13.100 / 7. 13.100 libswscale 5. 0.102 / 5. 0.102 libswresample 3. 0.101 / 3. 0.101 libpostproc 55. 0.100 / 55. 0.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\Users\Anwender\Videos\CutErgebnis.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2avc1mp41 encoder : Lavf58.9.100 comment : Captured with Snagit 13.1.3.7993 : Microphone - Mikrofon (Steam Streaming Microphone) : Duration: 00:01:51.99, start: 0.015011, bitrate: 148 kb/s Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1918x718 [SAR 1:1 DAR 959:359], 149 kb/s, 14.79 fps, 15 tbr, 15k tbn, 30 tbc (default) Metadata: handler_name : VideoHandler Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 1 kb/s (default) Metadata: handler_name : SoundHandler
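
A hedged workaround that is commonly suggested for this message, with no claim that it addresses the root cause here (paths shortened for readability): raise the muxing queue size so the audio packets that pile up while the overlay filter graph initializes do not overflow it, or simply drop the oddly low-bitrate (1 kb/s) audio stream.

ffmpeg -i CutErgebnis.mp4 -i pic.png -filter_complex "overlay=0:0" ^
       -max_muxing_queue_size 1024 Logo.mp4

REM or, if the audio track is not needed:
ffmpeg -i CutErgebnis.mp4 -i pic.png -filter_complex "overlay=0:0" -an Logo.mp4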

ffmpeg cannot generate thumbnail


On Windows, to make a thumbnail with ffmpeg I use:

./ffmpeg -i 1.mp4 -ss 00:00:01 -f image2 1.jpg

or:

./ffmpeg -ss 00:00:01 -i 1.mp4 -f image2 1.jpg

But neither command can generate a thumbnail; ffmpeg displays:

ffmpeg -i input.mp4 -ss 1 -frames:v 1 output.jpg
ffmpeg version N-91013-g8007a86363 Copyright (c) 2000-2018 the FFmpeg developers built with gcc 7.3.1 (GCC) 20180406 configuration: libavutil 56. 18.100 / 56. 18.100 libavcodec 58. 19.101 / 58. 19.101 libavformat 58. 13.102 / 58. 13.102 libavdevice 58. 4.100 / 58. 4.100 libavfilter 7. 21.100 / 7. 21.100 libswscale 5. 2.100 / 5. 2.100 libswresample 3. 2.100 / 3. 2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4': Metadata: major_brand : isom minor_version : 1 compatible_brands: isom creation_time : 2017-01-11T08:30:55.000000Z encoder : My MP4Box GUI 0.6.0.6  Duration: 00:03:27.93, start: 0.000000, bitrate: 9345 kb/s Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 2560x1440 [SAR 1:1 DAR 16:9], 9216 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc (default) Metadata: creation_time : 2016-10-14T14:16:02.000000Z handler_name : videoplayback.mp4 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default) Metadata: creation_time : 2017-01-11T08:30:56.000000Z handler_name : videoplayback (1).m4a
Stream mapping: Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[swscaler @ 0x55ad99ffe1c0] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to 'output.jpg': Metadata: major_brand : isom minor_version : 1 compatible_brands: isom encoder : Lavf58.13.102 Stream #0:0(und): Video: mjpeg, yuvj420p(pc), 2560x1440 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc (default) Metadata: creation_time : 2016-10-14T14:16:02.000000Z handler_name : videoplayback.mp4 encoder : Lavc58.19.101 mjpeg Side data: cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame= 0 fps=0.0 q=0.0 Lsize=N/A time=00:00:00.00 bitrate=N/A speed= 0x video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)

and I find that for videos whose encoder is

 My MP4Box GUI 0.6.0.6 

ffmpeg cannot generate a thumbnail. How can I generate the thumbnail?
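
A couple of hedged variations worth trying with this file, without any claim about why this particular MP4Box-muxed input fails: seek on the input side (before -i), explicitly request a single video frame, and set a JPEG quality; if the first second contains nothing decodable, a later seek point sometimes helps.

./ffmpeg -ss 00:00:01 -i 1.mp4 -frames:v 1 -q:v 2 1.jpg
./ffmpeg -ss 00:00:05 -i 1.mp4 -frames:v 1 -q:v 2 1.jpg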

ffmpeg set timecode offset in output


I'm trying to create a .ts file with a timecode starting at a specific offset. Let's say an input file input.ts exists; running ffprobe on it says "start: 8636.xxx". Now I would like to create a copy with an additional start time offset, using:

ffmpeg -i input.ts -someoption output.ts

The options known to me for manipulating the time, like -copyts, -ss and -timecode, won't work. Is there an option which allows me to add an extra time offset to the video stream?

Edit:

Here is the ffprobe output of the original ts file:

Duration: 00:06:03.52, start: 6204.163600, bitrate: 3880 kb/s Program 12103 Metadata: service_name : ?ProSieben service_provider: ?Unitymedia Stream #0:0[0x21f]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, top first), 720x576 [SAR 64:45 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc Stream #0:1[0x220](ger): Audio: mp2 ([4][0][0][0] / 0x0004), 48000 Hz, stereo, s16p, 192 kb/s Stream #0:2[0x222](ger): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 384 kb/s No Program Stream #0:3[0x224]: Subtitle: dvb_teletext
Unsupported codec with id 94215 for input stream 3

And here is the ffprobe output of the newly created file after running ffmpeg -i input.ts -copyts -output_ts_offset 2428.6 output.ts:

Duration: 00:06:03.36, start: 8634.319544, bitrate: 4372 kb/s Program 1 Metadata: service_name : Service01 service_provider: FFmpeg Stream #0:0[0x100]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, progressive), 720x576 [SAR 64:45 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc Stream #0:1[0x101](ger): Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo, s16p, 384 kb/s

I don't know much about the stream format itself. However, I can see that the newly created output file contains fewer streams and that some details have changed, like "tv, progressive" instead of "tv, top first".

I'd like to have an exact copy of the original, except with different time stamps. Is that possible?
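
A hedged sketch of how to keep the copy closer to the original: -map 0 keeps every input stream instead of only the default ones, and -c copy avoids re-encoding, which is where the "progressive" vs. "top first" difference and the changed bitrates come from. The unsupported teletext stream may still need to be ignored or dropped; the offset is the one from the question.

ffmpeg -i input.ts -map 0 -c copy -ignore_unknown \
       -copyts -output_ts_offset 2428.6 output.ts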

Can you generate a video with audio and video clips with ffmpeg?


I am making an application in which I have to generate a single video clip of a given duration. I was recommended to use ffmpeg, but I do not know whether this can be done and, if so, what the structure would be, since I understand that you need a file that lists the paths of the videos and audio. I am working with CodeIgniter and PHP 7, and I have already run ffmpeg to change the format of a video.

The question is the following: from a database I fetch the list of video paths with their duration and size, and the same for the audio files. How can I create the video with ffmpeg from this?
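
A hedged sketch of the list-file approach the question hints at: the concat demuxer reads a text file naming the clips (which the PHP side can generate from the database rows) and joins them, and a second command can mux a separate audio track over the result. The filenames are placeholders, and stream copy assumes all clips share the same codecs and parameters.

# concat_list.txt, generated from the database rows, one line per clip:
#   file '/path/to/clip1.mp4'
#   file '/path/to/clip2.mp4'
ffmpeg -f concat -safe 0 -i concat_list.txt -c copy joined.mp4

# Optionally add an audio file on top of the joined video:
ffmpeg -i joined.mp4 -i music.mp3 -map 0:v -map 1:a -c:v copy -shortest final.mp4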

Passing arguments to FFMPEG using subprocess.call()


I was working through this answer to an FFMPEG question and the command works just fine through the Windows 10 command prompt (I've only changed the input and output filenames):

ffmpeg -i test.mp4 -filter:v "select='gt(scene,0.4)',showinfo" -f null - 2> test.txt

My Python 3 script gives arguments (as a list) to the subprocess.call() function and works fine for a number of basic FFMPEG operations, but not this one! It seems to be failing at the final null - 2> test.txt part, with the following error messages depending on how I split the arguments:

[NULL @ 000001c7e556a3c0] [error] Unable to find a suitable output format for 'pipe:'
[error] pipe:: Invalid argument
[error] Unrecognized option '2> test.txt'.
[fatal] Error splitting the argument list: Option not found
[error] Unrecognized option '2>'.
[fatal] Error splitting the argument list: Option not found

Here's the basic list of arguments I've been trying:

args=['C:\\Program Files\\ffmpeg\\ffmpeg.exe', '-i', 'test.mp4', '-filter:v "select=\'gt(scene,0.4)\',showinfo"', '-f null', '-', '2>', 'test.txt']

Plus various permutations combining and splitting the last few elements.

Please could somebody help me with the right syntax for running FFMPEG with these arguments through Python 3?

Many thanks - I just can't see where I'm going wrong :(
