Channel: MediaSPIP

checkasm: af_afir: Use a dynamic tolerance depending on values

As the values generated by av_bmg_get can be arbitrarily large (only the stddev is specified), we can't use a fixed tolerance. Calculate a dynamic tolerance (like in float_dsp from 38f966b2222db), based on the individual steps of the calculation.

This fixes running this test with certain seeds, when built with clang for mingw/x86_32.

Signed-off-by: Martin Storsjö
  • [DH] tests/checkasm/af_afir.c
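The principle behind a value-dependent tolerance can be sketched outside the checkasm harness; `close_enough` below is a hypothetical Python helper illustrating the idea, not FFmpeg's actual C code:

```python
def close_enough(ref, new, rel_tol=1e-6, abs_tol=1e-9):
    """Accept a difference proportional to the operands' magnitude.

    A fixed absolute tolerance fails once values grow large, because float
    rounding error scales with the exponent of the values involved.
    """
    return abs(ref - new) <= max(abs_tol, rel_tol * max(abs(ref), abs(new)))
```

With arbitrarily large Gaussian samples, a fixed threshold would either reject correct results for large values or mask real errors for small ones; scaling the threshold per value avoids both.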

avformat: Don't free old extradata before ff_alloc/get_extradata

These functions already free it themselves before they allocate the new extradata.

Signed-off-by: Andreas Rheinhardt
Signed-off-by: Michael Niedermayer
  • [DH] libavformat/avidec.c
  • [DH] libavformat/cafdec.c
  • [DH] libavformat/concatdec.c
  • [DH] libavformat/flic.c
  • [DH] libavformat/flvdec.c
  • [DH] libavformat/matroskaenc.c
  • [DH] libavformat/mov.c
  • [DH] libavformat/nuv.c
  • [DH] libavformat/oggparseogm.c
  • [DH] libavformat/oggparseopus.c
  • [DH] libavformat/riffdec.c
  • [DH] libavformat/rtpdec_latm.c
  • [DH] libavformat/rtpdec_mpeg4.c
  • [DH] libavformat/rtpdec_qdm2.c
  • [DH] libavformat/rtpdec_svq3.c
  • [DH] libavformat/utils.c
  • [DH] libavformat/wavdec.c
  • [DH] libavformat/xmv.c


Raspberry Pi 4 live streaming with ffmpeg


So Speedify created a blog post and YouTube video about making an IRL streaming backpack using the Elgato Cam Link 4K, Raspberry Pi 4, and ffmpeg.

They gave pretty detailed instructions, and included downloads to prebuilt scripts/commands to get it all running once put together. Blog post: https://speedify.com/blog/how-to/build-irl-streaming-backpack-complete-guide/

ffmpeg command from post:

ffmpeg_command = "/home/pi/bin/ffmpeg -nostdin -re -f v4l2 -s '1280x720' -framerate 24 -i /dev/video0 -f alsa -ac 2 -i hw:CARD=Link,DEV=0 -vcodec libx264 -framerate 24 -rtbufsize 1500k -s 1280x720 -preset ultrafast -pix_fmt yuv420p -crf 17 -force_key_frames 'expr:gte(t,n_forced*2)' -minrate 850k -maxrate 1000k -b:v 1000k -bufsize 1000k -acodec libmp3lame -rtbufsize 1500k -b 96k -ar 44100 -f flv - | ffmpeg -f flv -i - -c copy -f flv -drop_pkts_on_overflow 1 -attempt_recovery 1 -recovery_wait_time 1 rtmp://live.twitch.tv/app/live_" + streamKey + "'\n"

I replaced -i hw:CARD=Link,DEV=0 in that command with -i hw:2,0, because the former gave me "file does not exist" errors in the log. streamKey is filled with the appropriate key for my Twitch account.

Github Resources + Instructions used: https://github.com/speedify/rpi-streaming-experiment

I'm using exactly the same hardware as outlined in the post, and as far as I can tell everything is installed correctly. But when I run the ffmpeg command, nothing actually seems to get sent over to Twitch.

The log after trying to run it looks like this. If anybody has any insight as to what may be going wrong, it would be greatly appreciated.

Starting ffmpeg
ffmpeg version N-95970-gd5274f8 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 8 (Raspbian 8.3.0-6+rpi1)
configuration: --prefix=/home/pi/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/pi/ffmpeg_build/include --extra-ldflags=-L/home/pi/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/home/pi/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
libavutil      56. 36.101 / 56. 36.101
libavcodec     58. 64.101 / 58. 64.101
libavformat    58. 35.101 / 58. 35.101
libavdevice    58.  9.101 / 58.  9.101
libavfilter     7. 67.100 /  7. 67.100
libswscale      5.  6.100 /  5.  6.100
libswresample   3.  6.100 /  3.  6.100
libpostproc    55.  6.100 / 55.  6.100
[video4linux2,v4l2 @ 0x2aac5e0] The V4L2 driver changed the video from 1280x720 to 1920x1080
[video4linux2,v4l2 @ 0x2aac5e0] The driver changed the time per frame from 1/24 to 117/7013
[video4linux2,v4l2 @ 0x2aac5e0] Dequeued v4l2 buffer contains 4147200 bytes, but 3110400 were expected. Flags: 0x00012001.
Input #0, video4linux2,v4l2, from '/dev/video0':
  Duration: N/A, start: 4683.201589, bitrate: 1491503 kb/s
    Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 1920x1080, 1491503 kb/s, 59.94 fps, 59.94 tbr, 1000k tbn, 1000k tbc
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, alsa, from 'hw:2,0':
  Duration: N/A, start: 1576099663.557438, bitrate: 1536 kb/s
    Stream #1:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
Please use -b:a or -b:v, -b is ambiguous
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
  Stream #1:0 -> #0:1 (pcm_s16le (native) -> mp3 (libmp3lame))
[video4linux2,v4l2 @ 0x2aac5e0] Dequeued v4l2 buffer contains 4147200 bytes, but 3110400 were expected. Flags: 0x00012001.
    Last message repeated 9 times
[video4linux2,v4l2 @ 0x2aac5e0] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[video4linux2,v4l2 @ 0x2aac5e0] Dequeued v4l2 buffer contains 4147200 bytes, but 3110400 were expected. Flags: 0x00012001.
    Last message repeated 28 times
terminated script
pipe:: could not find codec parameters
Exiting normally, received signal 15.
    Last message repeated 15 times
[alsa @ 0x2aaf2c0] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
Finishing stream 0:0 without any data written to it.
[libx264 @ 0x2abee40] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x2abee40] profile Constrained Baseline, level 3.2, 4:2:0, 8-bit
[libx264 @ 0x2abee40] 264 - core 158 - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc_lookahead=0 rc=crf mbtree=0 crf=17.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=1000 vbv_bufsize=1000 crf_max=0.0 nal_hrd=none filler=0 ip_ratio=1.40 aq=0
Finishing stream 0:1 without any data written to it.
Output #0, flv, to 'pipe:':
  Metadata:
    encoder         : Lavf58.35.101
    Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuv420p, 1280x720, q=-1--1, 96 kb/s, 59.94 fps, 1k tbn, 59.94 tbc
    Metadata:
      encoder         : Lavc58.64.101 libx264
    Side data:
      cpb: bitrate max/min/avg: 1000000/0/96000 buffer size: 1000000 vbv_delay: N/A
    Stream #0:1: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 44100 Hz, stereo, s16p
    Metadata:
      encoder         : Lavc58.64.101 libmp3lame
[flv @ 0x2abda90] Failed to update header with correct duration.
[flv @ 0x2abda90] Failed to update header with correct filesize.
Error writing trailer of pipe:: Broken pipe
frame=    0 fps=0.0 q=0.0 Lsize=       0kB time=00:00:00.00 bitrate=N/A speed=   0x
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Exiting normally, received signal 15.

This message repeats until the script is terminated with the Circuit Express button; for length, many instances of this line were cut out.

[video4linux2,v4l2 @ 0x2aac5e0] Dequeued v4l2 buffer contains 4147200
bytes, but 3110400 were expected. Flags: 0x00012001.
Last message repeated xx times

Output from v4l2-ctl --list-formats-ext

ioctl: VIDIOC_ENUM_FMT
    Type: Video Capture

    [0]: 'YUYV' (YUYV 4:2:2)
        Size: Discrete 1920x1080
            Interval: Discrete 0.017s (59.940 fps)
    [1]: 'NV12' (Y/CbCr 4:2:0)
        Size: Discrete 1920x1080
            Interval: Discrete 0.017s (59.940 fps)
    [2]: 'YU12' (Planar YUV 4:2:0)
        Size: Discrete 1920x1080
            Interval: Discrete 0.017s (59.940 fps)
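The buffer-size mismatch in the log is consistent with this format list: 4147200 bytes is one 1920x1080 frame of packed 4:2:2 ('YUYV', 2 bytes per pixel), while ffmpeg expected 3110400 bytes, one 1920x1080 frame of planar 4:2:0 ('YU12', 1.5 bytes per pixel). Since the device only advertises 1920x1080 modes, one hedged fix is to request that size and a matching pixel format explicitly instead of 1280x720. A sketch of the capture arguments, built as a list to avoid the smart-quote problems in the copied command (`-input_format`, `-video_size` and `-framerate` are real v4l2 demuxer options, but this is untested on this device):

```python
capture_args = [
    "ffmpeg", "-nostdin",
    "-f", "v4l2",
    "-input_format", "yuv420p",   # 'YU12': 1920 * 1080 * 3/2 = 3110400 bytes/frame
    "-video_size", "1920x1080",   # the only size the device advertises
    "-framerate", "60",           # nearest integer to the listed 59.940 fps
    "-i", "/dev/video0",
]

# The two frame sizes from the log, derived from the pixel formats:
yuyv_frame = 1920 * 1080 * 2        # packed 4:2:2, what the driver delivered
yu12_frame = 1920 * 1080 * 3 // 2   # planar 4:2:0, what ffmpeg expected
```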

Log output after ffmpeg command modification.

Starting ffmpeg
ffmpeg version N-95970-gd5274f8 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 8 (Raspbian 8.3.0-6+rpi1)
configuration: --prefix=/home/pi/ffmpeg_build --pkg-config-flags=--static --extra-cflags=-I/home/pi/ffmpeg_build/include --extra-ldflags=-L/home/pi/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/home/pi/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
libavutil      56. 36.101 / 56. 36.101
libavcodec     58. 64.101 / 58. 64.101
libavformat    58. 35.101 / 58. 35.101
libavdevice    58.  9.101 / 58.  9.101
libavfilter     7. 67.100 /  7. 67.100
libswscale      5.  6.100 /  5.  6.100
libswresample   3.  6.100 /  3.  6.100
libpostproc    55.  6.100 / 55.  6.100
terminated script
Input #0, video4linux2,v4l2, from '/dev/video0':
  Duration: N/A, bitrate: 1491503 kb/s
    Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 1920x1080, 1491503 kb/s, 59.94 fps, 59.94 tbr, 1000k tbn, 1000k tbc
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, alsa, from 'hw:1,0':
  Duration: N/A, bitrate: 1536 kb/s
    Stream #1:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
[rtmp @ 0x2605cd0] Cannot open connection tcp://live.twitch.tv:1935
rtmp://live.twitch.tv/app/live_142979291_K2ys11kDmH2sSLfp9pSDFvCU1n9ejq: Immediate exit requested
Exiting normally, received signal 15.

configure: Check for the SetDllDirectory and GetModuleHandle functions

These functions aren't available when building for the restricted UWP/WinRT/WinStore API subsets. Normally when building in this mode, one is probably only building the libraries, but being able to build ffmpeg.exe is still useful (and an ffmpeg.exe targeting these API subsets can still be run, e.g. in Wine, for testing).

Signed-off-by: Martin Storsjö
  • [DH] configure
  • [DH] fftools/cmdutils.c

Webcam streaming from Mac using FFmpeg


I want to stream my webcam from Mac using FFmpeg.

First I checked the supported devices using ffmpeg -f avfoundation -list_devices true -i ""

Output:

[AVFoundation input device @ 0x7fdf1bd03000] AVFoundation video devices:
[AVFoundation input device @ 0x7fdf1bd03000] [0] USB 2.0 Camera #2
[AVFoundation input device @ 0x7fdf1bd03000] [1] FaceTime HD Camera
[AVFoundation input device @ 0x7fdf1bd03000] [2] Capture screen 0
[AVFoundation input device @ 0x7fdf1bd03000] [3] Capture screen 1
[AVFoundation input device @ 0x7fdf1bd03000] AVFoundation audio devices:
[AVFoundation input device @ 0x7fdf1bd03000] [0] Built-in Microphone

The device[0] is the webcam I want to use.


Then I tried to capture the webcam using ffmpeg -f avfoundation -i "0" out.mpg

Output:

[avfoundation @ 0x7fe7f3810600] Selected framerate (29.970030) is not supported by the device
[avfoundation @ 0x7fe7f3810600] Supported modes:
[avfoundation @ 0x7fe7f3810600] 320x240@[120.101366 120.101366]fps
[avfoundation @ 0x7fe7f3810600] 640x480@[120.101366 120.101366]fps
[avfoundation @ 0x7fe7f3810600] 800x600@[60.000240 60.000240]fps
[avfoundation @ 0x7fe7f3810600] 1024x768@[30.000030 30.000030]fps
[avfoundation @ 0x7fe7f3810600] 1280x720@[60.000240 60.000240]fps
[avfoundation @ 0x7fe7f3810600] 1280x1024@[30.000030 30.000030]fps
[avfoundation @ 0x7fe7f3810600] 1920x1080@[30.000030 30.000030]fps
[avfoundation @ 0x7fe7f3810600] 320x240@[30.000030 30.000030]fps
[avfoundation @ 0x7fe7f3810600] 640x480@[30.000030 30.000030]fps
[avfoundation @ 0x7fe7f3810600] 800x600@[20.000000 20.000000]fps
[avfoundation @ 0x7fe7f3810600] 1024x768@[6.000002 6.000002]fps
0: Input/output error

After that, I tried to stream this webcam from my Mac using ffmpeg -f avfoundation -framerate 30 -i "0" -f mpeg1video -b 200k -r 30 -vf scale=1920:1080 http://127.0.0.1:8082/

Output:

[avfoundation @ 0x7f8515012800] An error occurred: The activeVideoMinFrameDuration passed is not supported by the device. Use -activeFormat.videoSupportedFrameRateRanges to discover valid ranges.
0: Input/output error

I cannot capture or stream from this webcam. However, when I used the FaceTime camera instead, everything was OK. I've been searching this problem for a few days but still cannot fix it. Does anyone have experience with webcams and FFmpeg on Mac?
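Since the error lists the device's supported (size, fps) pairs, one workaround is to request one of those pairs explicitly; `capture_cmd` below is a hypothetical helper that builds such a command (the avfoundation options `-framerate` and `-video_size` are real, but this is a sketch, not a verified fix for this camera):

```python
def capture_cmd(size, fps, device="0", out="out.mpg"):
    """Build an avfoundation capture command using a mode the device lists.

    This camera has no 29.97 fps mode, which is why ffmpeg's default
    framerate request fails; ask for a listed pair such as 1920x1080 @ 30.
    """
    return ["ffmpeg", "-f", "avfoundation",
            "-framerate", f"{fps:g}",
            "-video_size", size,
            "-i", device, out]
```

For example, `capture_cmd("1920x1080", 30.000030)` requests the listed 1080p mode with `-framerate 30`.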

ffmpeg encoded video has video/audio sync delay when uploading to Facebook & WhatsApp

"fluent-ffmpeg": "^2.1.2", "ffmpeg": "^0.0.4", node : 8

Code to reproduce


let command = ffmpeg()
  .input(tempFilePath)
  .input(watermarkFilePath)
  .complexFilter([
    "[0:v][1:v]overlay=W-w-20:H-h-20"
  ])
  .videoBitrate(2500)
  .videoCodec('libx264')
  .audioCodec('aac')
  .format('mp4')
  .output(targetTempFilePath)

When applying the ffmpeg encoding command to the attached video, it plays fine on a local device. The issue is that after uploading to Facebook or WhatsApp, the audio and video become out of sync.

Any ideas on what I need to change in the video/audio settings so that audio and video stay in sync, even after upload to the various social networks?
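Without knowing the root cause, one hedged thing to try is a post-processing pass that forces the audio to track its timestamps and moves the moov atom to the front of the file; `aresample=async=1` and `-movflags +faststart` are real ffmpeg options, but the file names are hypothetical and this is a guess, not a confirmed fix:

```python
# Post-processing step to try after the fluent-ffmpeg encode.
sync_args = [
    "ffmpeg", "-i", "watermarked.mp4",   # hypothetical input name
    "-c:v", "copy",                      # keep the already-encoded video
    "-af", "aresample=async=1",          # pad/trim audio to match its timestamps
    "-c:a", "aac",
    "-movflags", "+faststart",           # moov atom first; friendlier to upload pipelines
    "synced.mp4",
]
```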

Here's a link to the 3 video files (original, post ffmpeg, post whatsapp upload that includes delay) if you want to get a better idea!

https://wetransfer.com/downloads/445dfaf0f323a73c56201b818dc0267b20191213052112/24e635

Thank you and appreciate any help!!


How can I check the encoders used for a video


I am making a Python script that uses ffmpeg and moviepy to convert videos to MP4. I want an if statement that checks whether the input file needs to be re-encoded or just rewrapped (if the input file is already AAC and H.264, it does not need to be re-encoded). Is there a simple way to grab that file info?
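A common way to get that info is ffprobe's JSON output. The decision function below is a sketch; profiles, pixel formats and container quirks can also matter, so treat the h264/aac check as the minimal test the question describes:

```python
import json
import subprocess

def stream_codecs(path):
    """Return the set of codec names in a media file via ffprobe."""
    out = subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_streams", "-print_format", "json", path,
    ])
    return {s["codec_name"] for s in json.loads(out)["streams"]}

def needs_reencode(codecs):
    """Rewrapping into MP4 is enough when everything is already H.264/AAC."""
    return not codecs <= {"h264", "aac"}
```

Then `needs_reencode(stream_codecs("input.avi"))` picks between a full transcode and a `-c copy` rewrap.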


How to install ffmpeg and an app together on a Mac?


I have an electron app built and packaged for macOS in a .app file. The app requires ffmpeg to be installed on the end-user's computer to be used.

Currently, I've had to manually install ffmpeg on each end-user's computer to run the app.

I want to distribute the app online with an easy installer for both ffmpeg and the app. I've seen .dmg files which allow you to drag the .app into the Applications folder easily, but the ffmpeg dependency is still absent from that installation process.

How can I install ffmpeg and the app together on a Mac?

Perhaps including the ffmpeg build in the .app contents is a solution as well. This may not be possible, though, because a relevant question mentions there are only abstractions over the ffmpeg CLI rather than something that can use ffmpeg directly.


Record multiple RTSP streams into a single file


I need to record 4 RTSP streams into a single file.

Streams must be placed into the video in this way:

 ---------------------
|          |          |
| STREAM 1 | STREAM 2 |
|          |          |
|----------|----------|
|          |          |
| STREAM 3 | STREAM 4 |
|          |          |
 ---------------------

I need to synchronize these live streams to within about one second. This is challenging because the streams have variable framerates (FPS).

I have tried ffmpeg, but the streams are not synchronized. Here is the command:

ffmpeg \
  -i "rtsp://IP-ADDRESS/cam/realmonitor?channel=1&subtype=00" \
  -i "rtsp://IP-ADDRESS/live?real_stream" \
  -i "rtsp://IP-ADDRESS/live?real_stream" \
  -i "rtsp://IP-ADDRESS/live?real_stream" \
  -filter_complex " \
    nullsrc=size=1920x1080 [base]; \
    [0:v] scale=960x540 [video0]; \
    [1:v] scale=960x540 [video1]; \
    [2:v] scale=960x540 [video2]; \
    [3:v] scale=960x540 [video3]; \
    [base][video0] overlay=shortest=1:x=0:y=0 [tmp1]; \
    [tmp1][video1] overlay=shortest=0:x=960:y=0 [tmp2]; \
    [tmp2][video2] overlay=shortest=0:x=0:y=540 [tmp3]; \
    [tmp3][video3] overlay=shortest=0:x=960:y=540 [v]; \
    [0:a]amix=inputs=1[a]" \
  -map "[v]" -map "[a]" -c:v h264 videos/test-combine-cams.mp4

Is there a way to combine and synchronize the streams in ffmpeg, or using other utilities such as VLC, openRTSP or OpenCV?
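On newer ffmpeg builds the whole 2x2 grid can also be expressed with the xstack filter instead of chained overlays; the sketch below only generates the filtergraph string, and it does not by itself fix synchronization (for live RTSP sources, timestamp-based options such as `-use_wallclock_as_timestamps 1` per input are the usual, imperfect lever):

```python
def mosaic_filter(n=4, cell_w=960, cell_h=540):
    """Build a 2x2 xstack filtergraph equivalent to the overlay chain."""
    scales = ";".join(f"[{i}:v]scale={cell_w}x{cell_h}[v{i}]" for i in range(n))
    inputs = "".join(f"[v{i}]" for i in range(n))
    # top-left, top-right, bottom-left, bottom-right
    layout = "0_0|w0_0|0_h0|w0_h0"
    return f"{scales};{inputs}xstack=inputs={n}:layout={layout}[v]"
```

The resulting string is passed to `-filter_complex`, with `-map "[v]"` as before.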


avcodec/cbs_av1_syntax_template: Check num_y_points

"It is a requirement of bitstream conformance that num_y_points is less than or equal to 14."

Fixes: index 24 out of bounds for type 'uint8_t [24]'
Fixes: 19282/clusterfuzz-testcase-minimized-ffmpeg_BSF_AV1_FRAME_MERGE_fuzzer-5747424845103104

Note, also needs a23dd33606d5

Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Reviewed-by: jamrial
Signed-off-by: Michael Niedermayer
  • [DH] libavcodec/cbs_av1.h
  • [DH] libavcodec/cbs_av1_syntax_template.c

fate/cbs: use the rawvideo muxer for AV1 tests

The IVF muxer autoinserts the av1_metadata filter unconditionally, which is not desirable for these tests.

Signed-off-by: James Almer
  • [DH] tests/fate/cbs.mak
  • [DH] tests/ref/fate/cbs-av1-av1-1-b10-23-film_grain-50
  • [DH] tests/ref/fate/cbs-av1-av1-1-b8-02-allintra
  • [DH] tests/ref/fate/cbs-av1-av1-1-b8-03-sizedown
  • [DH] tests/ref/fate/cbs-av1-av1-1-b8-03-sizeup
  • [DH] tests/ref/fate/cbs-av1-av1-1-b8-04-cdfupdate
  • [DH] tests/ref/fate/cbs-av1-av1-1-b8-05-mv
  • [DH] tests/ref/fate/cbs-av1-av1-1-b8-06-mfmv
  • [DH] tests/ref/fate/cbs-av1-av1-1-b8-22-svc-L1T2
  • [DH] tests/ref/fate/cbs-av1-av1-1-b8-22-svc-L2T1
  • [DH] tests/ref/fate/cbs-av1-av1-1-b8-22-svc-L2T2
  • [DH] tests/ref/fate/cbs-av1-av1-1-b8-23-film_grain-50
  • [DH] tests/ref/fate/cbs-av1-decode_model
  • [DH] tests/ref/fate/cbs-av1-frames_refs_short_signaling
  • [DH] tests/ref/fate/cbs-av1-non_uniform_tiling
  • [DH] tests/ref/fate/cbs-av1-seq_hdr_op_param_info
  • [DH] tests/ref/fate/cbs-av1-switch_frame

How to split a video using black frames as markers in ffmpeg?


I want to split a video so that a new file starts wherever there are black frames. Is there a way to do this in one ffmpeg command? For the moment I can detect black frames using:

ffmpeg -i myfile -vf blackdetect=d=2:pix_th=0.00 -f rawvideo -y /NUL
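There is no single-command split on black frames, but a two-pass approach works: run blackdetect as above, capture the `black_start`/`black_end` values it prints to stderr, and feed the midpoints of the gaps to the segment muxer. A sketch of the glue code (note that `-c copy` in the second pass can only cut at keyframes, so cuts may shift slightly):

```python
import re

def black_ranges(ffmpeg_stderr):
    """Parse '[blackdetect @ ...] black_start:10 black_end:12 ...' lines."""
    pat = re.compile(r"black_start:(\d+\.?\d*)\s+black_end:(\d+\.?\d*)")
    return [(float(a), float(b)) for a, b in pat.findall(ffmpeg_stderr)]

def split_points(ranges):
    """Cut in the middle of each black gap."""
    return [(start + end) / 2 for start, end in ranges]

def segment_cmd(src, points, out_pattern="out%03d.mp4"):
    """Second pass: split src at the given times without re-encoding."""
    times = ",".join(f"{t:g}" for t in points)
    return ["ffmpeg", "-i", src, "-c", "copy",
            "-f", "segment", "-segment_times", times, out_pattern]
```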

FFMPEG, any video to 16:9


Help me find a command or script that will convert any video to 16:9, H.264, and ~2500 kbps. I have a server where people upload videos of different quality, size and length; it can be anything from 640x480 to 1216x2160. Ultimately, I need to bring any resolution to 16:9 (with black borders if needed) at a bitrate without visible loss of quality, acceptable for online broadcasting.

I have this command, but it does not check the resolution of the video. If the input was 560x448 at 1000 kbps and 700 MB, after conversion it becomes 1280x720 at 3000 kbps and 1.5 GB, which is not right.

ffmpeg -i 5.avi -vcodec libx264 -crf 23 -preset veryfast -vf scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1 -tune zerolatency highoutput.mp4
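The missing piece is choosing a target that never upscales: probe the input size first (e.g. with ffprobe), then pick the largest 16:9 target at or below it and feed that into the same scale/pad filter. The resolution ladder below is an assumption to adjust for your service:

```python
def target_size(in_w, in_h, ladder=((1920, 1080), (1280, 720), (854, 480))):
    """Pick the largest 16:9 target that does not upscale the input."""
    for w, h in ladder:
        if in_w >= w or in_h >= h:
            return w, h
    return ladder[-1]   # tiny inputs still get the smallest target
```

With this, a 560x448 upload lands letterboxed in 854x480 instead of being blown up to 1280x720, and 1216x2160 is brought down to 1920x1080.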

avfilter/vf_datascope: add decimal output

avfilter/vf_datascope: add decimal output
  • [DH] doc/filters.texi
  • [DH] libavfilter/vf_datascope.c

Python: How to decode a mp3 chunk into PCM samples?


I'm trying to catch chunks of an MP3 webstream and decode them into PCM samples for signal processing. I tried to fetch the audio via requests and io.BytesIO and save the data as a .wav file.

I have to convert the MP3 data to WAV data, but I don't know how. (My goal is not to record a .wav file; I am just doing this to test the algorithm.)

I found the pymedia lib, but it is very old (last commit in 2006), Python 2.7 only, and I could not install it.

Maybe it is possible with ffmpeg-python, but I have just seen examples using files as input and output.

Here's my code:

import requests
import io
import soundfile as sf
import struct
import wave
import numpy as np

def main():
    stream_url = r'http://dg-wdr-http-dus-dtag-cdn.cast.addradio.de/wdr/1live/diggi/mp3/128/stream.mp3'
    r = requests.get(stream_url, stream=True)
    sample_array = []
    try:
        for block in r.iter_content(1024):
            data, samplerate = sf.read(io.BytesIO(block), format="RAW",
                                       channels=2, samplerate=44100,
                                       subtype='FLOAT', dtype='float32')
            sample_array = np.append(sample_array, data)
    except KeyboardInterrupt:
        print("...saving")
        obj = wave.open('sounds/stream1.wav', 'w')
        obj.setnchannels(1)  # mono
        obj.setsampwidth(2)  # bytes
        obj.setframerate(44100)
        data_max = np.nanmax(abs(sample_array))
        # fill WAV with samples from sample_array
        for sample in sample_array:
            if (np.isnan(sample) or np.isnan(32760 * sample / data_max)) is True:
                continue
            try:
                value = int(32760 * sample / data_max)  # normalization INT16
            except ValueError:
                value = 1
            finally:
                data = struct.pack('<h', value)
                obj.writeframes(data)
        obj.close()

Do you have an idea how to handle this problem?
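soundfile cannot interpret MP3 frames as RAW float32 samples, which is why those reads fail or return garbage. One way around it, without temporary files, is to pipe the accumulated bytes through ffmpeg and read raw 16-bit PCM from its stdout; a sketch (arbitrary 1024-byte chunk boundaries are fine once the bytes are concatenated before decoding):

```python
import array
import struct
import subprocess

def pcm_bytes_to_floats(raw):
    """Interpret native-endian signed 16-bit PCM bytes as floats in [-1, 1)."""
    pcm = array.array("h")
    pcm.frombytes(raw)
    return [s / 32768.0 for s in pcm]

def decode_mp3_bytes(mp3_bytes, rate=44100):
    """Decode MP3 frames to mono float samples by piping through ffmpeg."""
    proc = subprocess.run(
        ["ffmpeg", "-v", "error",
         "-f", "mp3", "-i", "pipe:0",                    # MP3 frames in
         "-f", "s16le", "-ac", "1", "-ar", str(rate),    # raw PCM out
         "pipe:1"],
        input=mp3_bytes, stdout=subprocess.PIPE, check=True)
    return pcm_bytes_to_floats(proc.stdout)
```

The float conversion can be tested without ffmpeg, since it is pure byte-unpacking.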

how to add overlay at the end of video without knowing time duration of video file - ffmpeg


I have a bunch of video files to which I add an animated overlay at the beginning, but I would like to add it again at the end, starting 13 seconds before the end (t-13). This is my bash script:

do
  ffmpeg -i "${f}" -i /app/logo/lower.mov -i /app/logo/logo.png -filter_complex \
    "[0:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:x=(1280-iw)/2:y=(720-ih)/2:color=black[bg0]; \
     [bg0][1:v]overlay=10:10[bg1]; \
     [bg1][2:v]overlay=10:10,drawtext=fontfile=/app/logo/Courier Prime.ttf:text=$(basename "${f}" | cut -f 1 -d '.'): \
     fontcolor=white:fontsize=25:x=256:y=h-th-130:alpha=1:enable='between(t,2,15)'" \
    -c:v libx264 -crf 21 -preset ultrafast "${f%.*}.mp4" -y
done

Is there any way to do this? I know how to extract the duration with ffprobe, but I don't know how to store that duration in a variable and then use it in the command above.
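One way, sketched in Python for clarity (the same two steps work in bash with command substitution): ask ffprobe for the duration, then build the `enable=` expression for the last 13 seconds. The ffprobe flags are real; the helper names are made up:

```python
import subprocess

def video_duration(path):
    """Duration in seconds, via ffprobe."""
    out = subprocess.check_output([
        "ffprobe", "-v", "error", "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1", path,
    ])
    return float(out)

def end_overlay_enable(duration, lead=13):
    """enable= expression turning the overlay on for the last `lead` seconds."""
    start = max(0, duration - lead)
    return f"enable='between(t,{start:.2f},{duration:.2f})'"
```

In bash, the equivalent first step is `dur=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "${f}")`, after which `$dur` can be substituted into the filter string.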

H.264 encoded stream in MP4 container does not play right in WMP12 Windows 7


I'd like to figure out why my H.264 encoded stream in an MP4 container does not play back correctly in Windows Media Player 12 on Windows 7. The file plays well in VLC and other players, including WMP on Windows 10, but I'm wondering why WMP12 on Windows 7 does not play it.

It seems that only I-frames are displayed, and blank black frames are shown instead of all the P-frames in between. If I force my device encoder to produce only I-frames, the file plays back OK in WMP12 (Win 7), but the file size increases too much. What are the limits of the H.264 decoder in Windows 7?

I'm adding below the output of ffprobe -show_frames video.mp4 (just a few frames; ffprobe reports no errors):

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf57.83.100
  Duration: 00:00:05.51, start: 0.024000, bitrate: 513 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 720x480, 510 kb/s, 30.11 fps, 59.94 tbr, 90k tbn, 180k tbc (default)
    Metadata:
      handler_name    : VideoHandler
[FRAME]
media_type=video
stream_index=0
key_frame=1
pkt_pts=2160
pkt_pts_time=0.024000
pkt_dts=2160
pkt_dts_time=0.024000
best_effort_timestamp=2160
best_effort_timestamp_time=0.024000
pkt_duration=2340
pkt_duration_time=0.026000
pkt_pos=48
pkt_size=2979
width=720
height=480
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=I
coded_picture_number=0
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
[/FRAME]
[FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=4500
pkt_pts_time=0.050000
pkt_dts=4500
pkt_dts_time=0.050000
best_effort_timestamp=4500
best_effort_timestamp_time=0.050000
pkt_duration=4320
pkt_duration_time=0.048000
pkt_pos=3027
pkt_size=1662
width=720
height=480
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=P
coded_picture_number=1
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
[/FRAME]
[FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=8820
pkt_pts_time=0.098000
pkt_dts=8820
pkt_dts_time=0.098000
best_effort_timestamp=8820
best_effort_timestamp_time=0.098000
pkt_duration=3060
pkt_duration_time=0.034000
pkt_pos=4689
pkt_size=1073
width=720
height=480
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=P
coded_picture_number=2
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
[/FRAME]
[FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=11880
pkt_pts_time=0.132000
pkt_dts=11880
pkt_dts_time=0.132000
best_effort_timestamp=11880
best_effort_timestamp_time=0.132000
pkt_duration=2970
pkt_duration_time=0.033000
pkt_pos=5762
pkt_size=829
width=720
height=480
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=P
coded_picture_number=3
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
[/FRAME]
[FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=14850
pkt_pts_time=0.165000
pkt_dts=14850
pkt_dts_time=0.165000
best_effort_timestamp=14850
best_effort_timestamp_time=0.165000
pkt_duration=3060
pkt_duration_time=0.034000
pkt_pos=6591
pkt_size=987
width=720
height=480
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=P
coded_picture_number=4
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
[/FRAME]
[FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=17910
pkt_pts_time=0.199000
pkt_dts=17910
pkt_dts_time=0.199000
best_effort_timestamp=17910
best_effort_timestamp_time=0.199000
pkt_duration=2970
pkt_duration_time=0.033000
pkt_pos=7578
pkt_size=1344
width=720
height=480
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=P
coded_picture_number=5
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
[/FRAME]
[FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=20880
pkt_pts_time=0.232000
pkt_dts=20880
pkt_dts_time=0.232000
best_effort_timestamp=20880
best_effort_timestamp_time=0.232000
pkt_duration=2970
pkt_duration_time=0.033000
pkt_pos=8922
pkt_size=1850
width=720
height=480
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=P
coded_picture_number=6
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
[/FRAME]
[FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=23850
pkt_pts_time=0.265000
pkt_dts=23850
pkt_dts_time=0.265000
best_effort_timestamp=23850
best_effort_timestamp_time=0.265000
pkt_duration=3060
pkt_duration_time=0.034000
pkt_pos=10772
pkt_size=1972
width=720
height=480
pix_fmt=yuv420p
sample_aspect_ratio=N/A
pict_type=P
coded_picture_number=7
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
[/FRAME]

Please let me know if you spot any errors in these frames that might affect playback.

The encoded H.264 stream is created using Freescale libimxvpuapi and the MP4 container using ffmpeg.

More information on the file:

exiftool video.mp4

ExifTool Version Number         : 10.10
File Name : video.mp4
Directory : .
File Size : 346 kB
File Modification Date/Time : 2018:10:02 11:27:20+02:00
File Access Date/Time : 2018:10:02 11:41:14+02:00
File Inode Change Date/Time : 2018:10:02 11:27:21+02:00
File Permissions : rw-r--r--
File Type : MP4
File Type Extension : mp4
MIME Type : video/mp4
Major Brand : MP4 Base Media v1 [IS0 14496-12:2003]
Minor Version : 0.2.0
Compatible Brands : isom, iso2, avc1, mp41
Movie Data Size : 351479
Movie Data Offset : 48
Movie Header Version : 0
Create Date : 0000:00:00 00:00:00
Modify Date : 0000:00:00 00:00:00
Time Scale : 1000
Duration : 5.51 s
Preferred Rate : 1
Preferred Volume : 100.00%
Preview Time : 0 s
Preview Duration : 0 s
Poster Time : 0 s
Selection Time : 0 s
Selection Duration : 0 s
Current Time : 0 s
Next Track ID : 2
Track Header Version : 0
Track Create Date : 0000:00:00 00:00:00
Track Modify Date : 0000:00:00 00:00:00
Track ID : 1
Track Duration : 5.51 s
Track Layer : 0
Track Volume : 0.00%
Matrix Structure : 1 0 0 0 1 0 0 0 1
Image Width : 720
Image Height : 480
Media Header Version : 0
Media Create Date : 0000:00:00 00:00:00
Media Modify Date : 0000:00:00 00:00:00
Media Time Scale : 90000
Media Duration : 5.51 s
Media Language Code : und
Handler Description : VideoHandler
Graphics Mode : srcCopy
Op Color : 0 0 0
Compressor ID : avc1
Source Image Width : 720
Source Image Height : 480
X Resolution : 72
Y Resolution : 72
Bit Depth : 24
Video Frame Rate : 30.109
Handler Type : Metadata
Handler Vendor ID : Apple
Encoder : Lavf57.83.100
Avg Bitrate : 510 kbps
Image Size : 720x480
Megapixels : 0.346
Rotation : 0

Many thanks!

Please find a video sample here:

https://www.dropbox.com/s/zehqgqhd8ychy0t/video.mp4?dl=0

-- https://www.dropbox.com/s/zehqgqhd8ychy0t/video.mp4?dl=0

Revision 24475: Change the behaviour of the id_? criterion while keeping compatibility with the ca...


- Only id_xxxx fields are looked up; the criterion's compute function alone does the work, and id_secteur is added when an id_rubrique exists.
- Injecting other kinds of fields is no longer allowed: the lister_champs_selection_conditionnelle pipeline becomes exclure_id_conditionnel, whose only purpose is to list the ids to exclude from the criterion.

Python extract wav from video file


Related:

How to extract audio from a video file using python?

Extract audio from video as wav

How to rip the audio from a video?

My question is: how could I extract a WAV audio track from a video file, say video.avi? I read many articles, and everywhere people suggest using ffmpeg as a subprocess from Python (because there are no reliable Python bindings to ffmpeg; the only hope was PyFFmpeg, but I found it is unmaintained now). I don't know if that is the right solution, and I am looking for a good one.
I looked at GStreamer and found it nice, but unable to satisfy my needs. The only way I found to accomplish this from the command line looks like:

 gst-launch-0.10 playbin2 uri=file://`pwd`/ex.mp4 audio-sink='identity single-segment=true ! audioconvert ! audio/x-raw-int, endianness=(int)1234, signed=(boolean)true, width=(int)16, depth=(int)16, rate=(int)16000, channels=(int)1 ! wavenc ! filesink location=foo.wav' 

But it is not efficient, because I need to wait ages while it plays the video in real time and simultaneously writes to the WAV file.

ffmpeg is much better:

avconv -i foo.mp4 -ab 160k -ac 1 -ar 16000 -vn ffaudio.wav

But I am unable to launch it from Python (other than as a command-line subprocess). Could you please point out the pros and cons of launching ffmpeg from Python as a command-line utility? (I mean using the Python multiprocessing module or something similar.)

And a second question: what is a simple way to cut a long WAV file into pieces so that I don't break any words? I mean pieces of 10-20 seconds in length, each starting and ending during a pause between sentences/words.

I know how to break them into arbitrary pieces:

import wave

win = wave.open('ffaudio.wav', 'rb')
wout = wave.open('ffsegment.wav', 'wb')

t0, t1 = 2418, 2421  # cut audio between 2418 and 2421 seconds
s0, s1 = int(t0 * win.getframerate()), int(t1 * win.getframerate())
win.readframes(s0)  # discard everything before the segment
frames = win.readframes(s1 - s0)

wout.setparams(win.getparams())
wout.writeframes(frames)
win.close()
wout.close()
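To avoid cutting mid-word, nudge each planned cut to the quietest nearby stretch instead of the exact second. A crude energy-based search (window size and search radius are assumptions to tune per recording; proper voice-activity detection would do better):

```python
def find_pause(samples, around, window=1600, radius=8):
    """Return the start index of the quietest window near `around`.

    `samples` are signed 16-bit amplitudes (e.g. from wave.readframes after
    unpacking); at 16 kHz mono, window=1600 is a 0.1 s window.
    """
    best, best_energy = around, float("inf")
    lo = max(0, around - radius * window)
    hi = min(len(samples) - window, around + radius * window)
    for i in range(lo, hi, window):
        # mean absolute amplitude as a cheap loudness measure
        energy = sum(abs(s) for s in samples[i:i + window]) / window
        if energy < best_energy:
            best, best_energy = i, energy
    return best
```

The returned index replaces the fixed `s0`/`s1` sample positions in the wave-cutting snippet above.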