Channel: MediaSPIP
Viewing all 117701 articles

ffmpeg - converting webm videos generated by Chrome is slow


I generate webm files in two different ways: one using Chrome's WebRTC MediaRecorder, the other using a JS library that generates the webm video frame by frame (webm-writer-js). The file sizes are not that different (the fast one is about 60% of the slow one), but the conversion speed differs by roughly 10x.

Using the basic ffmpeg syntax -i input.webm output.mp4, the files created with Chrome's MediaRecorder in fact take almost 10x as long to convert. The conversion logs differ slightly but overall look very similar to my novice eyes. On the left is the fast conversion and on the right the slow one.

[screenshot: fast conversion log (left) vs. slow conversion log (right)]

The fast one throws a small error but the conversion seems successful. In the slow conversion you can see many frames being processed; in the fast one it is as if there were only one (very fast). Using -preset veryfast cuts the conversion time in half for both, but the loss of quality is visible.

Any idea how I could speed up the conversion for the videos generated by Chrome without compromising much in quality? Thanks a lot!

-- webm-writer-js: [screenshot: conversion log]
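To narrow down why one file converts so much more slowly, it can help to compare what FFmpeg actually sees in each input: stream codec, frame rate, and timebase (MediaRecorder output often lacks duration/seek metadata, which changes how the decoder paces frames). A sketch using ffprobe, which is assumed to be installed alongside ffmpeg:

```python
import json
import subprocess

def probe_cmd(path):
    # ffprobe invocation that dumps the stream parameters most relevant
    # to conversion speed (codec, frame rate, timebase) as JSON.
    return ['ffprobe', '-v', 'error',
            '-show_entries', 'stream=codec_name,avg_frame_rate,time_base',
            '-of', 'json', path]

def probe(path):
    # Requires ffprobe on PATH; run this on both webm files and diff
    # the results to see where the two recorders disagree.
    out = subprocess.run(probe_cmd(path), capture_output=True, text=True).stdout
    return json.loads(out)
```

Comparing the two JSON dumps side by side should reveal whether the slow file has unusual timestamps or frame-rate metadata.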

Convert audio files to mp3 using ffmpeg


I need to convert audio files to mp3 using ffmpeg.

When I write the command as ffmpeg -i audio.ogg -acodec mp3 newfile.mp3, I get the error:

FFmpeg version 0.5.2, Copyright (c) 2000-2009 Fabrice Bellard, et al.
  configuration:
  libavutil   49.15. 0 / 49.15. 0
  libavcodec  52.20. 1 / 52.20. 1
  libavformat 52.31. 0 / 52.31. 0
  libavdevice 52. 1. 0 / 52. 1. 0
  built on Jun 24 2010 14:56:20, gcc: 4.4.1
Input #0, mp3, from 'ZHRE.mp3':
  Duration: 00:04:12.52, start: 0.000000, bitrate: 208 kb/s
  Stream #0.0: Audio: mp3, 44100 Hz, stereo, s16, 256 kb/s
Output #0, mp3, to 'audio.mp3':
  Stream #0.0: Audio: 0x0000, 44100 Hz, stereo, s16, 64 kb/s
Stream mapping:
  Stream #0.0 -> #0.0
Unsupported codec for output stream #0.0

I also ran this command :

 ffmpeg -formats | grep mp3

and got this in response:

FFmpeg version 0.5.2, Copyright (c) 2000-2009 Fabrice Bellard, et al.
  configuration:
  libavutil   49.15. 0 / 49.15. 0
  libavcodec  52.20. 1 / 52.20. 1
  libavformat 52.31. 0 / 52.31. 0
  libavdevice 52. 1. 0 / 52. 1. 0
  built on Jun 24 2010 14:56:20, gcc: 4.4.1
 DE mp3             MPEG audio layer 3
 D A    mp3             MP3 (MPEG audio layer 3)
 D A    mp3adu          ADU (Application Data Unit) MP3 (MPEG audio layer 3)
 D A    mp3on4          MP3onMP4
 text2movsub remove_extra noise mov2textsub mp3decomp mp3comp mjpegadump imxdump h264_mp4toannexb dump_extra

I guess that the mp3 codec isn't installed. Am I right? Can anyone help me out here?
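The ffmpeg -formats output above shows " D A mp3" for the codec, i.e. decode only (no "E" for encode), so this build cannot encode MP3. On builds compiled with LAME support, the MP3 encoder is named libmp3lame, not mp3. A minimal sketch building the corrected command (assuming the target build has libmp3lame):

```python
def mp3_cmd(src, dst):
    # Select the LAME-based encoder explicitly: "-acodec mp3" asks for an
    # MP3 *encoder* named "mp3", which this build lacks (decode only).
    return ['ffmpeg', '-i', src, '-acodec', 'libmp3lame', dst]
```

So the fix would be ffmpeg -i audio.ogg -acodec libmp3lame newfile.mp3, provided FFmpeg was built with --enable-libmp3lame.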

Writing frames from camera using skvideo.io.FFmpegWriter


I'm trying to finely control the video encoding of camera image frames captured on the fly using skvideo.io.FFmpegWriter and cv2.VideoCapture, e.g.

from skvideo import io
import cv2

fps = 60
stream = cv2.VideoCapture(0)  # 0 is for /dev/video0
print("fps: {}".format(stream.set(cv2.CAP_PROP_FPS, fps)))
stream.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
stream.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
print("bit_depth: {}".format(stream.set(cv2.CAP_PROP_FORMAT, cv2.CV_8U)))
video = io.FFmpegWriter('/tmp/test_ffmpeg.avi',
                        inputdict={'-r': fps, '-width': 1920, '-height': 1080},
                        outputdict={'-r': fps, '-vcodec': 'libx264', '-pix_fmt': 'h264'})
try:
    for i in range(fps * 10):  # 10 s of video
        ret, frame = stream.read()
        video.writeFrame(frame)
finally:
    stream.release()
    try:
        video.close()
    except:
        pass

However, I get the following exception (in Jupyter notebook):

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
 in ()
     18 while range(fps*10):
     19     ret, frame = stream.read()
---> 20     video.writeFrame(frame)
     21 except BaseException as err:
     22     raise err

/usr/local/lib/python3.6/site-packages/skvideo/io/ffmpeg.py in writeFrame(self, im)
    446         T, M, N, C = vid.shape
    447         if not self.warmStarted:
--> 448             self._warmStart(M, N, C)
    449
    450         # Ensure that ndarray image is in uint8

/usr/local/lib/python3.6/site-packages/skvideo/io/ffmpeg.py in _warmStart(self, M, N, C)
    412         cmd = [_FFMPEG_PATH + "/" + _FFMPEG_APPLICATION, "-y"] + iargs + ["-i", "-"] + oargs + [self._filename]
    413
--> 414         self._cmd = "".join(cmd)
    415
    416         # Launch process

TypeError: sequence item 3: expected str instance, int found

Changing this to video.writeFrame(frame.tostring()) results in ValueError: Improper data input, leaving me stumped.

How should I go about writing each frame (as returned by OpenCV) to my FFmpegWriter instance?

EDIT

The code works fine if I remove inputdict and outputdict from the io.FFmpegWriter call, however this defeats the purpose for me as I need fine control over the video encoding (I am experimenting with lossless/near-lossless compression of the raw video captured from the camera and trying to establish the best compromise in terms of compression vs fidelity for my needs).
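For what it's worth, the traceback points at skvideo string-joining the FFmpeg argument list: every value in inputdict/outputdict must be a str, but -r, -width, and -height above are passed as ints. A minimal sketch of the workaround (also note that 'h264' is not a pixel format; '-pix_fmt': 'yuv420p' is more likely what was intended):

```python
def stringify(d):
    # skvideo builds the FFmpeg command line by joining the argument
    # list, so every inputdict/outputdict value must already be a str.
    return {k: str(v) for k, v in d.items()}

fps = 60
inputdict = stringify({'-r': fps, '-width': 1920, '-height': 1080})
outputdict = {'-r': str(fps), '-vcodec': 'libx264', '-pix_fmt': 'yuv420p'}
```

With the stringified dicts, _warmStart should build its command without the TypeError, so writeFrame can be called with the plain frame (no tostring() needed).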

C++ FFmpeg create mp4 file


I'm trying to create an mp4 video file with FFmpeg and C++, but the resulting file is broken (Windows' player shows "Can't play ... 0xc00d36c4"). If I create a .h264 file instead, it can be played with ffplay and successfully converted to mp4 on the command line.

My code:

int main() {
    char *filename = "tmp.mp4";
    AVOutputFormat *fmt;
    AVFormatContext *fctx;
    AVCodecContext *cctx;
    AVStream *st;

    av_register_all();
    avcodec_register_all();

    // auto detect the output format from the name
    fmt = av_guess_format(NULL, filename, NULL);
    if (!fmt) { cout << "Error av_guess_format()" << endl; system("pause"); exit(1); }

    if (avformat_alloc_output_context2(&fctx, fmt, NULL, filename) < 0) {
        cout << "Error avformat_alloc_output_context2()" << endl; system("pause"); exit(1);
    }

    // stream creation + parameters
    st = avformat_new_stream(fctx, 0);
    if (!st) { cout << "Error avformat_new_stream()" << endl; system("pause"); exit(1); }

    st->codecpar->codec_id = fmt->video_codec;
    st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codecpar->width = 352;
    st->codecpar->height = 288;
    st->time_base.num = 1;
    st->time_base.den = 25;

    AVCodec *pCodec = avcodec_find_encoder(st->codecpar->codec_id);
    if (!pCodec) { cout << "Error avcodec_find_encoder()" << endl; system("pause"); exit(1); }

    cctx = avcodec_alloc_context3(pCodec);
    if (!cctx) { cout << "Error avcodec_alloc_context3()" << endl; system("pause"); exit(1); }

    avcodec_parameters_to_context(cctx, st->codecpar);
    cctx->bit_rate = 400000;
    cctx->width = 352;
    cctx->height = 288;
    cctx->time_base.num = 1;
    cctx->time_base.den = 25;
    cctx->gop_size = 12;
    cctx->pix_fmt = AV_PIX_FMT_YUV420P;
    if (st->codecpar->codec_id == AV_CODEC_ID_H264) {
        av_opt_set(cctx->priv_data, "preset", "ultrafast", 0);
    }
    if (fctx->oformat->flags & AVFMT_GLOBALHEADER) {
        cctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }
    avcodec_parameters_from_context(st->codecpar, cctx);
    av_dump_format(fctx, 0, filename, 1);

    // OPEN FILE + WRITE HEADER
    if (avcodec_open2(cctx, pCodec, NULL) < 0) { cout << "Error avcodec_open2()" << endl; system("pause"); exit(1); }
    if (!(fmt->flags & AVFMT_NOFILE)) {
        if (avio_open(&fctx->pb, filename, AVIO_FLAG_WRITE) < 0) { cout << "Error avio_open()" << endl; system("pause"); exit(1); }
    }
    if (avformat_write_header(fctx, NULL) < 0) { cout << "Error avformat_write_header()" << endl; system("pause"); exit(1); }

    // CREATE DUMMY VIDEO
    AVFrame *frame = av_frame_alloc();
    frame->format = cctx->pix_fmt;
    frame->width = cctx->width;
    frame->height = cctx->height;
    av_image_alloc(frame->data, frame->linesize, cctx->width, cctx->height, cctx->pix_fmt, 32);

    AVPacket pkt;
    double video_pts = 0;
    for (int i = 0; i < 50; i++) {
        video_pts = (double)cctx->time_base.num / cctx->time_base.den * 90 * i;
        for (int y = 0; y < cctx->height; y++) {
            for (int x = 0; x < cctx->width; x++) {
                frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
                if (y < cctx->height / 2 && x < cctx->width / 2) {
                    /* Cb and Cr */
                    frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
                    frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
                }
            }
        }
        av_init_packet(&pkt);
        pkt.flags |= AV_PKT_FLAG_KEY;
        pkt.pts = frame->pts = video_pts;
        pkt.data = NULL;
        pkt.size = 0;
        pkt.stream_index = st->index;

        if (avcodec_send_frame(cctx, frame) < 0) { cout << "Error avcodec_send_frame()" << endl; system("pause"); exit(1); }
        if (avcodec_receive_packet(cctx, &pkt) == 0) {
            //cout << "Write frame " << to_string((int)pkt.pts) << endl;
            av_interleaved_write_frame(fctx, &pkt);
            av_packet_unref(&pkt);
        }
    }

    // DELAYED FRAMES
    for (;;) {
        avcodec_send_frame(cctx, NULL);
        if (avcodec_receive_packet(cctx, &pkt) == 0) {
            //cout << "-Write frame " << to_string((int)pkt.pts) << endl;
            av_interleaved_write_frame(fctx, &pkt);
            av_packet_unref(&pkt);
        } else {
            break;
        }
    }

    // FINISH
    av_write_trailer(fctx);
    if (!(fmt->flags & AVFMT_NOFILE)) {
        if (avio_close(fctx->pb) < 0) { cout << "Error avio_close()" << endl; system("pause"); exit(1); }
    }
    av_frame_free(&frame);
    avcodec_free_context(&cctx);
    avformat_free_context(fctx);
    system("pause");
    return 0;
}

Output of program:

Output #0, mp4, to 'tmp.mp4': Stream #0:0: Video: h264, yuv420p, 352x288, q=2-31, 400 kb/s, 25 tbn
[libx264 @ 0000021c4a995ba0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000021c4a995ba0] profile Constrained Baseline, level 2.0
[libx264 @ 0000021c4a995ba0] 264 - core 152 r2851 ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=12 keyint_min=1 scenecut=0 intra_refresh=0 rc=abr mbtree=0 bitrate=400 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0
[libx264 @ 0000021c4a995ba0] frame I:5 Avg QP: 7.03 size: 9318
[libx264 @ 0000021c4a995ba0] frame P:45 Avg QP: 4.53 size: 4258
[libx264 @ 0000021c4a995ba0] mb I I16..4: 100.0% 0.0% 0.0%
[libx264 @ 0000021c4a995ba0] mb P I16..4: 0.0% 0.0% 0.0% P16..4: 100.0% 0.0% 0.0% 0.0% 0.0% skip: 0.0%
[libx264 @ 0000021c4a995ba0] final ratefactor: 9.11
[libx264 @ 0000021c4a995ba0] coded y,uvDC,uvAC intra: 18.9% 21.8% 14.5% inter: 7.8% 100.0% 15.5%
[libx264 @ 0000021c4a995ba0] i16 v,h,dc,p: 4% 5% 5% 86%
[libx264 @ 0000021c4a995ba0] i8c dc,h,v,p: 2% 9% 6% 82%
[libx264 @ 0000021c4a995ba0] kb/s:264.68

If I try to play the mp4 file with ffplay, it prints:

[mov,mp4,m4a,3gp,3g2,mj2 @ 00000000026bf900] Could not find codec parameters for stream 0 (Video: h264 (avc1 / 0x31637661), none, 352x288, 138953 kb/s): unspecified pixel format
[h264 @ 00000000006c6ae0] non-existing PPS 0 referenced
[h264 @ 00000000006c6ae0] decode_slice_header error
[h264 @ 00000000006c6ae0] no frame!

I've spent a lot of time without finding the issue. What could be the reason for it?

Thanks for any help!

ffmpeg: make loglevel verbose for frame duration warning

ffmpeg: make loglevel verbose for frame duration warning
  • [DH] fftools/ffmpeg.c

Does PTS have to start at 0?


I've seen a number of questions regarding video PTS values not starting at zero, or asking how to make them start at zero. I'm aware that with ffmpeg I can do something like ffmpeg -i to fix this kind of thing.

However it's my understanding that PTS values don't have to start at zero. For instance, if you join a live stream then odds are it has been going on for an hour and the PTS is already somewhere around 3600000+ but your video player faithfully displays everything just fine. Therefore I would expect there to be no problem if I intentionally created a video with a PTS value starting at, say, the current wall clock time.

I want to send a live stream using ffmpeg, but embed the current time into the stream. This can be used both for latency calculation while the stream is live, and later to determine when the stream was originally aired. From my understanding of PTS, something as simple as this should probably work:

ffmpeg -i video.flv -vf="setpts=RTCTIME" rtmp://

When I try this, however, ffmpeg outputs the following:

frame= 93 fps= 20 q=-1.0 Lsize= 9434kB time=535020:39:58.70 bitrate= 0.0kbits/s speed=1.35e+11x

Note the extremely large value for "time", the zero bitrate (0.0 kbits/s), and the absurd speed (135,000,000,000x!).

At first I thought the issue might be my timebase, so I tried the following:

ffmpeg -i video.flv -vf="settb=1/1K,setpts=RTCTIME/1K" rtmp://

This puts everything in terms of milliseconds (1 PTS tick = 1 ms), but I had the same issue (massive time, zero bitrate, and massive speed).

Am I misunderstanding something about PTS? Is it not allowed to start at non-zero values? Or am I just doing something wrong?
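For reference, a presentation timestamp is only meaningful together with its timebase: time in seconds = PTS ticks x timebase, and nothing requires it to start at zero. The arithmetic also suggests why wall-clock PTS values blow up the displayed time: setpts=RTCTIME stamps frames with microseconds since the Unix epoch (a value around 1.5e15), which any small timebase turns into decades. A worked sketch:

```python
from fractions import Fraction

def pts_seconds(pts, time_base):
    # presentation time in seconds = PTS ticks * timebase
    return float(pts * time_base)

# A PTS of 3_600_000 in a 1/1000 (millisecond) timebase is one hour in:
hour = pts_seconds(3_600_000, Fraction(1, 1000))

# RTCTIME-style epoch microseconds, divided by 1K and pushed through the
# same 1/1K timebase, still decode to decades of "elapsed" time -- which
# is consistent with the huge time= value in the log above.
epoch_us = 1_500_000_000_000_000
years = pts_seconds(epoch_us // 1000, Fraction(1, 1000)) / 3.15e7
```

So a non-zero start is fine in principle, but an epoch-sized start makes ffmpeg's progress statistics (time, bitrate, speed) meaningless, since they are all derived from the timestamps.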

ffmpeg output to opencv


I am currently writing a simple script to pipe the output of ffmpeg to OpenCV, but it keeps giving me errors.

My entire command line is

ffmpeg -flags output_corrupt -analyzeduration 32 -probesize 32 -i temp_file.h264 -updatefirst 1 -y -qscale:v 2 -vf scale="240:-1" -f image2pipe - | python cap.py

My code for the python script is below

import sys
import cv2 as cv
import numpy as np

while True:
    img = sys.stdin
    # print img
    # img = cv.imdecode(img, 1)
    if img is not None:
        cv.imshow("Video", img)
        cv.waitkey(1)
    else:
        print "No image"

After I execute the command line, I got the following messages:

ffmpeg version 3.4.2-2 Copyright (c) 2000-2018 the FFmpeg developers built with gcc 7 (Ubuntu 7.3.0-16ubuntu2) configuration: --prefix=/usr --extra-version=2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared libavutil 55. 78.100 / 55. 78.100 libavcodec 57.107.100 / 57.107.100 libavformat 57. 83.100 / 57. 83.100 libavdevice 57. 10.100 / 57. 10.100 libavfilter 6.107.100 / 6.107.100 libavresample 3. 7. 0 / 3. 7. 0 libswscale 4. 8.100 / 4. 8.100 libswresample 2. 9.100 / 2. 9.100 libpostproc 54. 7.100 / 54. 7.100
[h264 @ 0x557ea6b01a60] Stream #0: not enough frames to estimate rate; consider increasing probesize
[h264 @ 0x557ea6b01a60] decoding for stream 0 failed
Input #0, h264, from 'temp_file.h264': Duration: N/A, bitrate: N/A Stream #0:0: Video: h264 (High), yuvj420p(pc, progressive), 1280x720, 29.97 tbr, 1200k tbn, 59.94 tbc
Stream mapping: Stream #0:0 -> #0:0 (h264 (native) -> mpeg2video (native))
Press [q] to stop, [?] for help
[h264 @ 0x557ea6b22a60] error while decoding MB 54 2, bytestream -16
[h264 @ 0x557ea6b22a60] top block unavailable for requested intra mode -1
[h264 @ 0x557ea6b22a60] error while decoding MB 2 9, bytestream 6907
[h264 @ 0x557ea6b22a60] concealing 1275 DC, 1275 AC, 1275 MV errors in P frame
[h264 @ 0x557ea6b28740] top block unavailable for requested intra mode -1
[h264 @ 0x557ea6b28740] error while decoding MB 27 0, bytestream 5564
[h264 @ 0x557ea6b28740] deblocking filter parameters 7 13 out of range
[h264 @ 0x557ea6b28740] decode_slice_header error
[h264 @ 0x557ea6b28740] concealing 1440 DC, 1440 AC, 1440 MV errors in P frame
[h264 @ 0x557ea6ba5be0] deblocking filter parameters 7 -14 out of range
[h264 @ 0x557ea6ba5be0] decode_slice_header error
[swscaler @ 0x557ea7159ba0] deprecated pixel format used, make sure you did set range correctly
[mpeg2video @ 0x557ea6b2ecc0] too many threads/slices (10), reducing to 9
[h264 @ 0x557ea6ba5be0] concealing 1387 DC, 1387 AC, 1387 MV errors in P frame
Output #0, mpegts, to 'pipe:': Metadata: encoder : Lavf57.83.100 Stream #0:0: Video: mpeg2video (Main), yuv420p, 240x135, q=2-31, 200 kb/s, 29.97 fps, 90k tbn, 29.97 tbc Metadata: encoder : Lavc57.107.100 mpeg2video Side data: cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
[h264 @ 0x557ea6bc1b60] deblocking filter parameters -7 0 out of range
[h264 @ 0x557ea6bc1b60] decode_slice_header error
[h264 @ 0x557ea6bc1b60] concealing 1413 DC, 1413 AC, 1413 MV errors in P frame
[h264 @ 0x557ea6bddae0] top block unavailable for requested intra mode -1
[h264 @ 0x557ea6bddae0] error while decoding MB 9 0, bytestream 6647
[h264 @ 0x557ea6bddae0] concealing 1186 DC, 1186 AC, 1186 MV errors in P frame
[h264 @ 0x557ea6bf9a60] top block unavailable for requested intra mode
[h264 @ 0x557ea6bf9a60] error while decoding MB 16 9, bytestream 6182
[h264 @ 0x557ea6bf9a60] top block unavailable for requested intra mode -1
[h264 @ 0x557ea6bf9a60] error while decoding MB 6 18, bytestream 7043
[h264 @ 0x557ea6bf9a60] concealing 1440 DC, 1440 AC, 1440 MV errors in P frame
[h264 @ 0x557ea6c159e0] top block unavailable for requested intra mode -1
[h264 @ 0x557ea6c159e0] error while decoding MB 12 9, bytestream 6863
[h264 @ 0x557ea6c159e0] concealing 1342 DC, 1342 AC, 1342 MV errors in P frame
[h264 @ 0x557ea6c31d40] top block unavailable for requested intra mode
[h264 @ 0x557ea6c31d40] error while decoding MB 29 18, bytestream 6611
[h264 @ 0x557ea6c31d40] concealing 1405 DC, 1405 AC, 1405 MV errors in P frame
[h264 @ 0x557ea6c4e0a0] top block unavailable for requested intra mode
[h264 @ 0x557ea6c4e0a0] error while decoding MB 25 9, bytestream 6051
[h264 @ 0x557ea6c4e0a0] top block unavailable for requested intra mode -1
[h264 @ 0x557ea6c4e0a0] error while decoding MB 70 18, bytestream 5088
[h264 @ 0x557ea6c4e0a0] concealing 1419 DC, 1419 AC, 1419 MV errors in P frame
[h264 @ 0x557ea6c6a400] top block unavailable for requested intra mode
[h264 @ 0x557ea6c6a400] error while decoding MB 14 9, bytestream 6071
[h264 @ 0x557ea6c6a400] deblocking filter parameters -9 0 out of range
[h264 @ 0x557ea6c6a400] decode_slice_header error
[h264 @ 0x557ea6c6a400] concealing 1440 DC, 1440 AC, 1440 MV errors in P frame
[h264 @ 0x557ea6c86760] top block unavailable for requested intra mode -1
[h264 @ 0x557ea6c86760] error while decoding MB 54 9, bytestream 4422
[h264 @ 0x557ea6c86760] concealing 1256 DC, 1256 AC, 1256 MV errors in P frame
[h264 @ 0x557ea6ca2ac0] top block unavailable for requested intra mode -1
[h264 @ 0x557ea6ca2ac0] error while decoding MB 22 9, bytestream 5862
[h264 @ 0x557ea6ca2ac0] concealing 1335 DC, 1335 AC, 1335 MV errors in P frame
[h264 @ 0x557ea6b2f180] deblocking filter parameters 7 -4 out of range
[h264 @ 0x557ea6b2f180] decode_slice_header error
[h264 @ 0x557ea6b2f180] concealing 1430 DC, 1430 AC, 1430 MV errors in P frame
Traceback (most recent call last):
  File "cap.py", line 11, in 
    cv.imshow("Video", img)
TypeError: mat is not a numpy array, neither a scalar
[h264 @ 0x557ea6b22a60] concealing 1319 DC, 1319 AC, 1319 MV errors in P frame
[h264 @ 0x557ea6b28740] top block unavailable for requested intra mode -1
[h264 @ 0x557ea6b28740] error while decoding MB 16 9, bytestream 6218
[h264 @ 0x557ea6b28740] concealing 1416 DC, 1416 AC, 1416 MV errors in P frame
[h264 @ 0x557ea6ba5be0] top block unavailable for requested intra mode -1
[h264 @ 0x557ea6ba5be0] error while decoding MB 12 0, bytestream 6854
[h264 @ 0x557ea6ba5be0] top block unavailable for requested intra mode -1
[h264 @ 0x557ea6ba5be0] error while decoding MB 2 9, bytestream 7092

It seems that OpenCV doesn't recognize the image that I pipe to it:

Traceback (most recent call last):
  File "cap.py", line 11, in 
    cv.imshow("Video", img)
TypeError: mat is not a numpy array, neither a scalar

Anyone know where the problem is? Any help is appreciated, thanks in advance.
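The TypeError arises because sys.stdin is a file object, not a decoded image; something has to read bytes from the pipe and turn them into an array. A sketch of one common approach (an assumption on my part: it swaps image2pipe for raw video, i.e. the ffmpeg side would end with -f rawvideo -pix_fmt bgr24 - so frames have a fixed byte size):

```python
import io
import numpy as np

W, H = 240, 135  # must match the ffmpeg "-vf scale" output size

def read_frame(stream, w=W, h=H):
    # With "-f rawvideo -pix_fmt bgr24" on the ffmpeg side, every frame
    # is exactly w*h*3 bytes; reshape the buffer into an OpenCV-style
    # (height, width, channels) uint8 image.
    buf = stream.read(w * h * 3)
    if len(buf) < w * h * 3:
        return None  # pipe closed mid-frame
    return np.frombuffer(buf, dtype=np.uint8).reshape(h, w, 3)

# Smoke test with a fake pipe carrying one black frame:
fake = io.BytesIO(b"\x00" * (W * H * 3))
frame = read_frame(fake)
```

In the real script the stream would be sys.stdin.buffer (the binary layer), and the returned array can go straight into cv.imshow.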

avcodec/dpx: Check elements in 12bps planar path

avcodec/dpx: Check elements in 12bps planar path

Fixes: null pointer dereference
Fixes: 8946/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_DPX_fuzzer-5078915222601728
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: Carl Eugen Hoyos
Signed-off-by: Michael Niedermayer
  • [DH] libavcodec/dpx.c

avformat/movenc: Check that frame_types other than EAC3_FRAME_TYPE_INDEPENDENT have...

avformat/movenc: Check that frame_types other than EAC3_FRAME_TYPE_INDEPENDENT have a supported substream id

Fixes: out of array access
Fixes: ffmpeg_bof_1.avi
Found-by: Thuan Pham, Marcel Böhme, Andrew Santosa and Alexandru Razvan Caciulescu with AFLSmart
Signed-off-by: Michael Niedermayer
  • [DH] libavformat/movenc.c

avcodec/ac3_parser: Check init_get_bits8() for failure

avcodec/ac3_parser: Check init_get_bits8() for failure

Fixes: null pointer dereference
Fixes: ffmpeg_crash_6.avi
Found-by: Thuan Pham, Marcel Böhme, Andrew Santosa and Alexandru Razvan Caciulescu with AFLSmart
Reviewed-by: Paul B Mahol
Signed-off-by: Michael Niedermayer
  • [DH] libavcodec/ac3_parser.c

avformat/movenc: Do not pass AVCodecParameters in avpriv_request_sample

avformat/movenc: Do not pass AVCodecParameters in avpriv_request_sample

Fixes: out of array read
Fixes: ffmpeg_crash_8.avi
Found-by: Thuan Pham, Marcel Böhme, Andrew Santosa and Alexandru Razvan Caciulescu with AFLSmart
Signed-off-by: Michael Niedermayer
  • [DH] libavformat/movenc.c

avcodec/mpeg4videodec: Check read profile before setting it

avcodec/mpeg4videodec: Check read profile before setting it

Fixes: null pointer dereference
Fixes: ffmpeg_crash_7.avi
Found-by: Thuan Pham, Marcel Böhme, Andrew Santosa and Alexandru Razvan Caciulescu with AFLSmart
Signed-off-by: Michael Niedermayer
  • [DH] libavcodec/mpeg4videodec.c

avformat/movenc: Use mov->fc consistently for av_log()

avformat/movenc: Use mov->fc consistently for av_log()

Signed-off-by: Michael Niedermayer
  • [DH] libavformat/movenc.c

h264_slice: Fix return of incomplete frames from decoder

h264_slice: Fix return of incomplete frames from decoder

When not using libavformat for demuxing, AVCodecContext.has_b_frames gets set too late, causing the recovery frame heuristic in h264_refs to incorrectly flag an early frame as recovered. This patch sets has_b_frames earlier to prevent improperly flagging the frame as recovered.

Signed-off-by: Michael Niedermayer
  • [DH] libavcodec/h264_slice.c

Evolution #4102: Include order in cache/charger_plugins_options.php


So you're proposing that constants, before being defined by a define, could be pre-reserved by plugins via the $flux array, where any pre-reservations can be overridden.

But does that cover all the cases? Or rather, is it sufficient?

I'm not certain. For example, one might want to override a value pre-reserved by one particular plugin, but not by another.

That would require 2 entries per pre-reserved define: its value + the origin of that value.

Or, a bit simpler: its value + a priority index. A pre-reservation without a priority, or with priority 0, would always be overridden when needed. A pre-reservation with priority 100 could only be overridden by plugins that consider their override even more important, say priority 1000.
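The value-plus-priority scheme described above can be modelled in a few lines. This is a hypothetical illustration (in Python rather than PHP, for brevity), not SPIP code: a reservation replaces an earlier one only when its priority is strictly higher.

```python
def reserve(table, name, value, priority=0):
    # Keep one (value, priority) pair per constant; a later reservation
    # wins only if its priority is strictly higher. Priority 0 is the
    # default and is always overridable.
    current = table.get(name)
    if current is None or priority > current[1]:
        table[name] = (value, priority)

table = {}
reserve(table, "_LOG_MAX", 100)        # plugin default, priority 0
reserve(table, "_LOG_MAX", 500, 100)   # higher-priority override wins
reserve(table, "_LOG_MAX", 200, 10)    # lower priority, ignored
```

The final define() pass would then emit each name with its winning value, which keeps the "last word goes to the most insistent" semantics sketched in the discussion.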


Combining audio file and image with ffmpeg in python


tl;dr: how to use a bash ffmpeg command in python

So I'm trying to take one JPEG image and an audio file as input and generate a video file of the same duration as the audio file (by stretching the still image for the whole duration).

So, I found these: https://superuser.com/questions/1041816/combine-one-image-one-audio-file-to-make-one-video-using-ffmpeg

So I now have the command for the merging:

 ffmpeg -loop 1 -i image.jpg -i audio.wav -c:v libx264 -tune stillimage -c:a aac -b:a 192k -pix_fmt yuv420p -shortest out.mp4

Then I want to use it in Python, but I'm unable to figure out how to port it to ffmpeg-python or ffpy.

I found this: Combining an audio file with video file in python

So, I tried the same thing as him:

cmd = 'ffmpeg -loop 1 -i image.jpg -i message.mp3 -c:v libx264 -tune stillimage -c:a aac -b:a 192k -pix_fmt yuv420p -shortest out.mp4'
subprocess.check_output(cmd, shell=True)
subprocess.call(cmd, shell=True)

But I got "returned non-zero exit status 1". So what did I do wrong?
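"returned non-zero exit status 1" only says that ffmpeg failed; the actual reason is printed to ffmpeg's stderr, which check_output discards by default. A sketch that keeps the diagnostics (assuming Python 3's subprocess.run; passing an argument list instead of shell=True also avoids shell-quoting surprises):

```python
import shlex
import subprocess

cmd = ('ffmpeg -loop 1 -i image.jpg -i message.mp3 -c:v libx264 '
       '-tune stillimage -c:a aac -b:a 192k -pix_fmt yuv420p -shortest out.mp4')

def run_ffmpeg(cmd):
    # Split the command into an argument list and capture stderr: FFmpeg
    # writes all of its diagnostics there, so the real cause of a
    # non-zero exit status ends up in proc.stderr instead of being lost.
    proc = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr)
    return proc
```

Whatever the underlying problem is (a missing input file, a codec not in the build, etc.), the raised error message will now say so.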

Evolution #4102: Include order in cache/charger_plugins_options.php


jluc wrote:

So you're proposing that constants, before being defined by a define, could be pre-reserved by plugins via the $flux array, where any pre-reservations can be overridden.

Yes, in essence that's it. You get more flexible management of constants. The pipeline can be used to sort, or to overwrite certain values; it is (re)computed when visiting the admin_plugin&actualise=1 page, and its result is inserted at the beginning of the charger_options.php file, i.e. before any default define() shipped by the plugins. This preserves SPIP's emblematic override logic (whatever sits lowest in the chain may get the last word) and the magic continues.

But does that cover all the cases? Or rather, is it sufficient?

My solution does have a few limitations to keep in mind. The pipeline is not recomputed on every hit; as a consequence, values whose assignment should depend on the request context (e.g. IP / test_espace_prive, ...) would not be handled correctly. But those are rare cases, which already go beyond a simple configuration-constant declaration.

Or, a bit simpler: its value + a priority index.

For me, there is no need for explicit priority management. If you really want to force a value in all circumstances, there is always mes_options.php.

ffmpeg not working if run from py2app


I'm trying to build a simple app that concatenates 2 mp4 files. It works fine if I run it from the command line, but it doesn't work when run as a py2app app. If I run the app binary from a console (e.g. dist/addTag.app/Contents/MacOS/addTag), it works fine. It only fails when I launch the app by double-clicking it. Any ideas? Code below.

#! /usr/bin/python
import argparse
import ffmpeg
import os
import shutil
import sys
import time
from Tkinter import *

fields = 'Input Video', 'Tag Video', 'Output Name'

def fetch(entries, bu, lb, rt):
    bu['state'] = 'disabled'
    lb['text'] = 'working'
    rt.update()
    ffmpeg.concat(
        ffmpeg.input(entries[0][1].get()),
        ffmpeg.input(entries[1][1].get())
    ).output(os.path.expanduser("~/desktop/") + entries[2][1].get()).run()
    bu['state'] = 'normal'
    lb['text'] = 'Ready'
    rt.update()

def makeform(root, fields):
    entries = []
    for field in fields:
        row = Frame(root)
        lab = Label(row, width=15, text=field, anchor='w')
        ent = Entry(row)
        row.pack(side=TOP, fill=X, padx=5, pady=5)
        lab.pack(side=LEFT)
        ent.pack(side=RIGHT, expand=YES, fill=X)
        entries.append((field, ent))
    return entries

if __name__ == '__main__':
    root = Tk()
    root.title("Video Maker")
    ents = makeform(root, fields)
    root.bind('', (lambda event, e=ents: fetch(e)))
    label = Label(root, text="Ready")
    label.pack(side=LEFT)
    b1 = Button(root, text='Make Video', command=(lambda e=ents: fetch(e, b1, label, root)))
    b1.pack(side=LEFT, padx=5, pady=5)
    b2 = Button(root, text='Quit', command=root.quit)
    b2.pack(side=LEFT, padx=5, pady=5)
    root.mainloop()
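A common culprit for exactly this symptom (works from a terminal, fails on double-click) is that Finder-launched apps don't inherit the shell's PATH, so ffmpeg-python cannot find the ffmpeg binary. A hedged sketch of a workaround, to run before the first ffmpeg call (the directories are assumptions; point them at wherever ffmpeg is actually installed):

```python
import os

def ensure_ffmpeg_on_path(extra=('/usr/local/bin', '/opt/local/bin')):
    # Finder-launched apps get a minimal environment without the shell's
    # PATH, so the ffmpeg binary can be invisible to ffmpeg-python.
    # Prepend the likely install locations (assumed paths -- adjust).
    os.environ['PATH'] = os.pathsep.join(extra) + os.pathsep + os.environ.get('PATH', '')

ensure_ffmpeg_on_path()
```

Dropping a call like this near the top of the script should make the double-clicked app behave like the console run, if PATH is indeed the problem.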

Add PNG overlay to multi output HLS M3U8 with FFmpeg


I've been banging my head for 2 days now. I can currently output 3 resized M3U8 "HLS" renditions, but now I need to add an overlay to each output. The overlay image would need to be resized as well. My take is that the image should be applied to the source once and the 3 outputs generated from that. I have read that -vf cannot be used since there are 2 inputs. Here is what I am currently using, which works. How could I add an image overlay?

ffmpeg -hide_banner -y -i input.mov^
 -vf scale=w=640:h=360:force_original_aspect_ratio=decrease -c:v h264 -profile:v main -crf 20 -sc_threshold 0 -g 72 -keyint_min 72 -hls_time 4 -hls_playlist_type vod -b:v 800k -maxrate 856k -bufsize 1200k -b:a 96k -hls_flags single_file 360p.m3u8^
 -vf scale=w=1280:h=720:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -c:v h264 -profile:v main -crf 20 -sc_threshold 0 -g 72 -keyint_min 72 -hls_time 4 -hls_playlist_type vod -b:v 2800k -maxrate 2996k -bufsize 4200k -b:a 128k -hls_flags single_file 720p.m3u8^
 -vf scale=w=1920:h=1080:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -c:v h264 -profile:v main -crf 20 -sc_threshold 0 -g 72 -keyint_min 72 -hls_time 4 -hls_playlist_type vod -b:v 5000k -maxrate 5350k -bufsize 7500k -b:a 192k -hls_flags single_file 1080p.m3u8

Any advice would be greatly appreciated.

Thanks in advance.
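Since -vf accepts only a single input, the overlay has to go through -filter_complex: apply the PNG overlay once to the source, split the result, scale each branch, and map the labelled outputs to the variants with -map "[v0]" etc. A sketch that builds such a graph (the overlay position W-w-10:H-h-10 is an assumption; note each branch scales the already-overlaid video, so the PNG is resized along with it):

```python
def hls_overlay_graph(sizes=((640, 360), (1280, 720), (1920, 1080))):
    # Build a -filter_complex graph: overlay input 1 (the PNG) onto
    # input 0 (the video) once, split the composite N ways, then scale
    # each branch; every [vN] label gets its own -map "[vN]" output.
    n = len(sizes)
    branches = "".join("[ov{}]".format(i) for i in range(n))
    scales = ";".join(
        "[ov{i}]scale=w={w}:h={h}:force_original_aspect_ratio=decrease[v{i}]"
        .format(i=i, w=w, h=h)
        for i, (w, h) in enumerate(sizes)
    )
    return "[0:v][1:v]overlay=W-w-10:H-h-10,split={}{};{}".format(n, branches, scales)
```

The resulting string goes after -i input.mov -i logo.png as -filter_complex "...", with each HLS output keeping its existing encoder/bitrate flags but using -map "[v0]", -map "[v1]", -map "[v2]" in place of the per-output -vf.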

getting AVFrame pts value

  1. I have a AVStream of Video from FormatContext. [ avstream ]
  2. Read Packet
  3. Decode packet if it is from video.
  4. Now Display the following.

    Packet DTS -> 7200.00 [ from packet ]
    Frame PTS -> -9223372036854775808.000000
    stream time_base -> 0.000011
    Offset -> 0.080000 [ pts * time_base ]
    

code:

double pts = (double) packet.dts;
printf(" dts of packet %f , Frame pts: %f, timeBase %f Offset: %f ",
       pts, (double)pFrame->pts, av_q2d(avstream->time_base), pts * av_q2d(avstream->time_base));
  1. Why is the frame PTS negative? Is this expected behavior?
  2. Should I take the frame PTS from the packet DTS [ i.e. frame pts = packet dts ]?
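-9223372036854775808 is exactly INT64_MIN, i.e. FFmpeg's AV_NOPTS_VALUE printed as a double: the decoder simply reported "no PTS" for this frame, which is common for streams that only carry DTS. Falling back to the packet DTS (or using the frame's best-effort timestamp) is the usual remedy. A small sketch of that fallback, where effective_pts is a hypothetical helper, not a libav function:

```python
AV_NOPTS_VALUE = -(2 ** 63)  # INT64_MIN, FFmpeg's "no timestamp" marker

def effective_pts(frame_pts, packet_dts):
    # Hypothetical helper mirroring the usual fallback: when the decoder
    # reports no PTS for a frame, substitute the packet DTS.
    return packet_dts if frame_pts == AV_NOPTS_VALUE else frame_pts
```

With the values above, effective_pts would yield 7200, and 7200 x 0.000011 ≈ 0.08 s, matching the printed offset.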

