Channel: MediaSPIP

ffprobe or avprobe not found. Please install one


I want to add tags to an MP3 converted by youtube-dl and ffmpeg:

youtube-dl -o '/Output/qpgTC9MDx1o.mp3' qpgTC9MDx1o -f bestaudio --extract-audio --metadata-from-title "%(artist)s - %(title)s" 2>&1

I get this error in the output:

[youtube] qpgTC9MDx1o: Downloading webpage
[youtube] qpgTC9MDx1o: Extracting video information
[youtube] qpgTC9MDx1o: Downloading js player en_US-vfluGO3jj
[youtube] qpgTC9MDx1o: Downloading DASH manifest
[download] /var/www/vhosts/mp3-y.com/httpdocs/Mp3_Output/quick-mp3.com-JALAL-EL-HAMDAOUI-2007-ARRASSIATES-VOL2-F1P-9CDoxlQ.mp3 has already been downloaded
[download] 100% of 13.43MiB
WARNING: qpgTC9MDx1o: writing DASH m4a. Only some players support this container. Install ffmpeg or avconv to fix this automatically.
[fromtitle] parsed artist: Maroon 5
[fromtitle] parsed title: Animals
ERROR: ffprobe or avprobe not found. Please install one.
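youtube-dl needs ffmpeg/ffprobe for --extract-audio and --metadata-from-title. If a system-wide install is not possible (for example on shared hosting), it can be pointed at a static build instead; a minimal sketch, assuming the static binaries were unpacked to /var/www/vhosts/mp3-y.com/ffmpeg (a hypothetical path):

youtube-dl -o '/Output/qpgTC9MDx1o.mp3' qpgTC9MDx1o -f bestaudio --extract-audio --metadata-from-title "%(artist)s - %(title)s" --ffmpeg-location /var/www/vhosts/mp3-y.com/ffmpeg 2>&1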


MPD MPEG-DASH - Shows only one bitrate


Help. It shows only one bitrate.

player.getBitrateInfoListFor("video") returns a single bitrate: 454948.

The manifest.mpd was generated by GPAC.
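For dash.js to report several bitrates, the MPD needs one Representation per bitrate, which means encoding the source at several bitrates first and passing all of them to GPAC; a sketch with placeholder filenames:

MP4Box -dash 4000 -rap -profile dashavc264:live video_500k.mp4 video_1000k.mp4 video_2000k.mp4 -out manifest.mpd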

AWS Lambda - ffmpeg outputs distorted mp3 audio


How do I get ffmpeg to output MP3 correctly on AWS Lambda? It currently renders a distorted/clipping sound in parts of the audio, but the same command doesn't do that on my local machine.

I am running ffmpeg on AWS Lambda (Linux) using the static build provided by https://www.johnvansickle.com/ffmpeg/ (x86_64 build).

Here's the exact command:

ffmpeg -loglevel verbose -ss 0 -t 30 -y -i /tmp/ick_20180323005225.wav -codec:a libmp3lame -qscale:a 7 /tmp/ick_20180323005225-opa.mp3

Here is the sample file that I used:

http://www.brainybetty.com/FacebookFans/Feb112010/strings.wav
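One thing worth trying (a debugging sketch, not a confirmed fix) is taking the VBR path out of the equation by forcing a constant bitrate and an explicit sample rate, then comparing the Lambda output with the local one:

ffmpeg -loglevel verbose -ss 0 -t 30 -y -i /tmp/ick_20180323005225.wav -codec:a libmp3lame -b:a 192k -ar 44100 /tmp/ick_20180323005225-opa.mp3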


JavaScript Discord bot error: "FFMPEG not found" even though ffmpeg-binaries has been added to the package.json


I keep my bot online using Heroku, so installing FFmpeg on my own computer wouldn't really help.

The issue is that I'm still getting this error despite having the npm ffmpeg-binaries package added to my package.json's dependencies. I've also tried using git URLs from GitHub and the official FFmpeg website, as well as trying to install it directly using the run-command option on Heroku's application page, but I keep getting this same error. Am I missing something?
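One commonly used alternative to a local install is Heroku's ffmpeg buildpack, which puts an ffmpeg binary on the dyno's PATH; a sketch, assuming the Heroku CLI is set up and my-bot-app is a placeholder for the app name:

heroku buildpacks:add https://github.com/jonathanong/heroku-buildpack-ffmpeg-latest.git -a my-bot-app
git commit --allow-empty -m "trigger rebuild with ffmpeg buildpack"
git push heroku master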

FFmpeg cannot use text containing white space when adding text over a video


I am using ffmpeg to add text over a video (there can be more than one text). The problem I am facing is with text containing white space: ffmpeg reports an invalid argument.

My command is like this:

ffmpeg -i input -filter_complex drawtext=fontfile=fontpath:fontcolor=0x000000ff:fontsize=121.26316137279886:shadowcolor=0xffffffff:shadowx=0:shadowy=0:bordercolor=0xffffffff:borderw=0:box=1:boxcolor=0x00000000:boxborderw=30:x=284.73258578742804:y=703.5501114572116:enable='between(t,0,9)':text='hello hello' -c:v libx264 -preset ultrafast output

The error I am facing:

ffmpeg: Unable to find a suitable output format for 'hello''

ffmpeg: hello': Invalid argument

If I enter text without spaces it works perfectly fine, but things go wrong with text containing spaces. I have been stuck at this point for the last 2 days; any help would be much appreciated!
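The error suggests the shell is splitting the argument at the space, so ffmpeg sees hello' as a second output file. A sketch of the same command with the whole filtergraph wrapped in double quotes (input, fontpath and output are placeholders, as in the original):

ffmpeg -i input -filter_complex "drawtext=fontfile=fontpath:fontcolor=0x000000ff:fontsize=121.26316137279886:shadowcolor=0xffffffff:shadowx=0:shadowy=0:bordercolor=0xffffffff:borderw=0:box=1:boxcolor=0x00000000:boxborderw=30:x=284.73258578742804:y=703.5501114572116:enable='between(t,0,9)':text='hello hello'" -c:v libx264 -preset ultrafast output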

Programmatically convert flv to mp4 on iOS


In our iOS app, we receive an FLV (container) file with video and audio streams, something like this:

Input #0, flv, from 'test.flv':
  Metadata:
    streamName : flvLiveStream
    encoder : Lavf55.12.100
  Duration: 00:00:48.00, start: 66064.401000, bitrate: 632 kb/s
    Stream #0:0, 41, 1/1000: Video: h264 (Baseline), 1 reference frame, yuv420p(progressive, left), 1280x720, 0/1, 15 fps, 1k tbr, 1k tbn
    Stream #0:1, 14, 1/1000: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s

This needs to be converted to an MP4 container and format as well. I am trying to use ffmpeg (I believe that is the only way), following the transcoding.c example, but I fail at this stage:

Impossible to convert between the formats supported by the filter 'in' and the filter 'auto_scaler_0'

On macOS I am experimenting with the command line, e.g. ffmpeg -i test.flv test.mp4.

Is it feasible to port this to iOS, and will it work in all the different scenarios?

In Summary

- What is the best possible way to convert FLV to MP4 on an iOS device, where the video is H.264 and the audio is pcm_alaw (as shown above)?
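Since the probe output above shows the video is already H.264, a reasonable starting point on the desktop (the same arguments can then be ported to the ffmpeg libraries on iOS) is to copy the video and transcode only the pcm_alaw audio to AAC; a sketch:

ffmpeg -i test.flv -c:v copy -c:a aac -b:a 64k test.mp4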

FFMPEG. Combine rawAudio and rawVideo from named pipe


I have 2 NamedPipeServerStreams: one for audio (Stereo Mix), captured with WaveIn from NAudio, and one for video (screen capture), which just converts the BitBlt'd screen to a byte array.

I'm able to create an MP4 video from the raw video and, by changing the code, a separate WAV from the raw audio, but I am unable to combine/merge both into a single MP4 video.

These are the ffmpeg commands I'm using.

To create the audio wav file:

string args = @"-f s32le -channels 2 -sample_rate 44100 -i \\.\pipe\ffpipea -c copy output.wav";

The audio plays back too fast, but at least it captures something.

To create the video mp4 file:

string inputArgs = @"-framerate 8 -f rawvideo -pix_fmt bgr24 -video_size 1920x1080 -i \\.\pipe\ffpipev";
string outputArgs = "-vcodec libx264 -crf 23 -pix_fmt yuv420p -preset ultrafast -r 8 output.mp4";

My attempt to combine/merge both:

string args = @"-framerate 8 -f rawvideo -pix_fmt bgr24 -video_size 1920x1080 -i \\.\pipe\ffpipev " + @"-f s32le -channels 2 -sample_rate 44100 -i \\.\pipe\ffpipea " + "-map 0:0 -map 1:0 -vcodec libx264 -crf 23 -pix_fmt yuv420p -preset ultrafast -r 8 -c:a copy output.mp4";

Depending on what I change in the args, either WaitForConnection() never fires or the audio pipe breaks with "Pipe is broken".
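One thing to check (an assumption, since the capture format isn't shown): NAudio's WaveFormat defaults to 16-bit PCM, in which case the raw audio should be declared as s16le rather than s32le (reading 16-bit data as 32-bit would also explain the audio playing too fast), and the audio should be encoded (e.g. to AAC) rather than copied, since MP4 does not take raw PCM well. A sketch of the adjusted merge arguments:

-framerate 8 -f rawvideo -pix_fmt bgr24 -video_size 1920x1080 -i \\.\pipe\ffpipev -f s16le -channels 2 -sample_rate 44100 -i \\.\pipe\ffpipea -map 0:0 -map 1:0 -vcodec libx264 -crf 23 -pix_fmt yuv420p -preset ultrafast -r 8 -c:a aac output.mp4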

Please let me know if I can provide further information. Any help is greatly appreciated.

Error in FFmpeg build with cmake


When I build FFmpeg with CMake, I get these errors:

libavfilter/avf_showcqt.c:147: error: undefined reference to 'av_fft_end'
libavfilter/avf_showcqt.c:718: error: undefined reference to 'avpriv_vga16_font'
libavfilter/avf_showcqt.c:1383: error: undefined reference to 'av_fft_init'
libavfilter/avf_showcqt.c:1151: error: undefined reference to 'av_fft_permute'
libavfilter/avf_showcqt.c:1152: error: undefined reference to 'av_fft_calc'
libavfilter/avf_showfreqs.c:183: error: undefined reference to 'av_audio_fifo_free'
libavfilter/avf_showfreqs.c:184: error: undefined reference to 'av_fft_end'
libavfilter/avf_showfreqs.c:185: error: undefined reference to 'av_fft_init'

And this is my link configuration:

lib_avformat
lib_avcodec
lib_swscale
lib_avutil
lib_avfilter
lib_swresample
lib_postproc
lib_avdevice
lib_mp3lame
lib_fdk-aac
lib_x264
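Those symbols live in libavcodec (av_fft_*) and libavutil (avpriv_vga16_font, av_audio_fifo_*), and they are referenced from libavfilter objects. With a single-pass static linker (GNU ld), a library must appear before the libraries it depends on, so a link order along these lines usually resolves it (assuming static linking; the external codec libraries go last):

lib_avdevice
lib_avfilter
lib_avformat
lib_avcodec
lib_postproc
lib_swresample
lib_swscale
lib_avutil
lib_x264
lib_fdk-aac
lib_mp3lame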

Send stream from hdhomerun to AWS EC2 instance


I'm trying something odd for fun: sending a stream from an HDHomeRun to an AWS EC2 instance.

First approach: read the stream with ffmpeg (tested successfully locally).

hdhomerun_config discover says my device has IP 192.168.1.200, so I opened ports on my router like this:

:5005 -> 192.168.1.200:80
:5004 -> 192.168.1.200:5004

w3m and telnet say everything is ok.

But...

From the EC2 instance I run:

/usr/bin/ffmpeg -y -i 'http://:5004/auto/v5057?transcode=internet240' -t 12 -vn -acodec pcm_s16le -ar 16000 -ac 1 '/tmp/test.wav'

In tuner1 I can see...

Virtual Channel none
Frequency 698.000 MHz
Program Number 186
Modulation Lock t8qam64
Signal Strength 89%
Signal Quality 100%
Symbol Quality 100%
Streaming Rate none
Resource Lock 

In hdhomerun system logs...

19700102-10:27:25 Tuner: tuner0 tuning 5057 Telecinco (t8qam64:698MHz-186)
19700102-10:27:25 Tuner: tuner0 streaming http to :34124

Everything seems OK, but ffmpeg doesn't get any data.

Second approach: send the stream to Wowza.

I have a Wowza server running on an EC2 instance.

From a Linux box at my home I run:

root# /usr/bin/hdhomerun_config 1250D7B2 scan /tuner1 scan.log
root# /usr/bin/hdhomerun_config 1250D7B2 set /tuner1/channel auto:651000000
root# /usr/bin/hdhomerun_config 1250D7B2 get /tuner1/streaminfo
root# /usr/bin/hdhomerun_config 1250D7B2 get /tuner1/program 190
root# /usr/bin/hdhomerun_config 1250D7B2 set /tuner1/target rtp://:1935/TEST/hdhr
root# echo $?
root# 0

It ends without error but doesn't work. This was a long shot but ....

Any ideas on how to do this?
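One variant that might be worth trying (untested, and EC2_HOST below is a placeholder): pull from the HDHomeRun on the LAN and push the transport stream to the EC2 instance with ffmpeg, so the tuner never has to stream to the public address itself:

ffmpeg -i 'http://192.168.1.200:5004/auto/v5057?transcode=internet240' -c copy -f mpegts 'udp://EC2_HOST:5004'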

Thanks !!!


use ffmpeg to set start_time equal in audio and video elementary streams


I am using the ffmpeg tool for offline transcoding of some input files to MPEG-TS, and I use ffprobe to analyze the output. I need the output to have equal start_time values for both the video and audio elementary streams; this is required for streaming by the Perception streamer server. My desired output is like this:

I use this profile for transcoding:

-ss 0 -y -vcodec libx264 -vb 3404k -acodec libfdk_aac -profile:a aac_he -strict experimental -ar 48k -f adts -ab 96k -r 25 -g 50 -force_key_frames 'expr:gte(t,n_forced*2)' -x264-params keyint=50:min-keyint=50:scenecut=-1:force-cfr=1:nal-hrd=cbr -vsync 1 -async 1 -profile:v main -level 4.0 -s 1920x1080 -aspect 16:9 -avoid_negative_ts make_zero -strict experimental -muxdelay 0 -muxpreload 0 -output_ts_offset 0 -initial_offset 0 -start_at_zero -bufsize 3500K -minrate 3500K -maxrate 3500K -f mpegts

How can I set start_time and start_pts like I explained?
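For reference, the per-stream start values can be checked on any candidate output like this (output.ts is a placeholder):

ffprobe -v error -show_entries stream=index,codec_type,start_pts,start_time -of compact output.ts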

libavformat/dashdec: Support negative value of the @r attribute of S in SegmentTimeline element

libavformat/dashdec: Support negative values of the @r attribute of S in the SegmentTimeline element. The following patch adds support for parsing a negative value of the @r attribute of S in the SegmentTimeline element. Example streams:
1. http://dash.edgesuite.net/dash264/TestCases/1c/qualcomm/1/MultiRate.mpd
2. http://dash.edgesuite.net/dash264/TestCases/1c/qualcomm/2/MultiRate.mpd
  • [DH] libavformat/dashdec.c

FFMPEG : Set Opacity of audio waveform color


I am trying to make the generated waveform transparent. It seems there is no direct option for this in the 'showwaves' filter, so I came across 'colorkey', which might help.

I am trying the following:

ffmpeg -y -loop 1 -threads 0 -i background.png -i input.mp3 -filter_complex "[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0x0000ff,colorkey=color=0x0000ff:similarity=0.01:blend=0.1[v]; [0:v][v] overlay=0:155 [v1]" -map "[v1]" -map 1:a -c:v libx264 -crf 35 -ss 0 -t 5 -c:a copy -shortest -pix_fmt yuv420p -threads 0 test_org.mp4

So I want a blue waveform and want to set its opacity to somewhere between 1 and 0, let's say. But it seems this generates a black box, which is the actual 1280x100 background. I want to keep the waveform's background transparent and change the opacity of the waveform only.
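Since showwaves draws onto a transparent background, one approach worth trying (a sketch, not verified against this exact command) is to drop colorkey, convert the waveform to rgba and scale its alpha down with colorchannelmixer before overlaying:

ffmpeg -y -loop 1 -threads 0 -i background.png -i input.mp3 -filter_complex "[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0x0000ff,format=rgba,colorchannelmixer=aa=0.5[v]; [0:v][v] overlay=0:155 [v1]" -map "[v1]" -map 1:a -c:v libx264 -crf 35 -ss 0 -t 5 -c:a copy -shortest -pix_fmt yuv420p -threads 0 test_org.mp4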

Result of my command: (screenshot)

Can you please let me know your suggestions?

@Gyan, this is with reference to the following question, which you answered:

Related last question

Thanks, Hardik


ffmpeg-concat is not running from the shell_exec() function?


The command runs fine from the CLI on Linux:

ffmpeg-concat -t circleopen -d 750 -o /var/www/html/testing_video/huzzah.mp4 /var/www/html/testing_video/1.mp4 /var/www/html/testing_video/2.mp4 /var/www/html/testing_video/3.mp4 2>&1

It works perfectly on the CLI, but when I run it via shell_exec() from a PHP file it gives an error:

failed to create OpenGL context
    at module.exports (/usr/lib/node_modules/ffmpeg-concat/lib/context.js:24:11)
    at module.exports (/usr/lib/node_modules/ffmpeg-concat/lib/render-frames.js:19:21)
    at module.exports (/usr/lib/node_modules/ffmpeg-concat/lib/index.js:53:32)
    at

Anyhow, I debugged the code: the function is being called and the width and height parameters are passed, but it still returns null.

Node version: v8.11.3
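ffmpeg-concat renders its transitions with OpenGL (headless-gl), so when it is launched from PHP under the web server there may be no display to create a GL context on, which would match the error above. A sketch of a workaround using a virtual framebuffer (requires the xvfb package):

xvfb-run -a ffmpeg-concat -t circleopen -d 750 -o /var/www/html/testing_video/huzzah.mp4 /var/www/html/testing_video/1.mp4 /var/www/html/testing_video/2.mp4 /var/www/html/testing_video/3.mp4 2>&1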

Could not load file or assembly 'accord.video.ffmpeg.x64.dll' or one of its dependencies


I'm using Accord.Video.FFMPEG.x64, and my project is built for x64 as well. It is a ClickOnce Windows Forms application. I installed Accord through NuGet, and the Visual C++ redistributable is installed.

Everything works fine when I run the program from debug. But when I publish it and try to run it (on the same machine or any other machine) I get the error "could not load file or assembly 'accord.video.ffmpeg.x64.dll' or one of its dependencies."

Thank you for any help you can provide.

FFMPEG : Capture and Stream from Android Mobile Camera


I am looking into how to use the ffmpeg command line on Android, for example using the WritingMinds ffmpeg build compiled for Android or any other Git project.

I am trying to run the following command:

-loglevel trace -f android_camera -camera_index 0 -video_size hd720 -framerate 30 -input_queue_size 2 -i discarded -map 0 -vcodec libx264 -y /sdcard/editor_input/out.mp4

But it gives Unrecognized option 'camera_index'.
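"Unrecognized option" usually means the android_camera input device was not compiled into that particular build; a quick check is to list the devices the bundled binary knows about and look for android_camera in the output:

ffmpeg -devices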

There are some projects like https://github.com/CrazyOrr/FFmpegRecorder which have a recorder app that stores an MP4 file, but they seem to use the FFmpeg APIs and code to grab frames and so on. I just want to use ffmpeg directly as a command line, like we do on Linux.

Can someone please advise on the best way to capture from the Android camera and stream MPEG-TS over UDP?

Thanks.


Output image with correct aspect with ffmpeg


I have a mkv video with the following properties (obtained with mediainfo):

Width : 718 pixels
Height : 432 pixels
Display aspect ratio : 2.35:1
Original display aspect ratio : 2.35:1

I'd like to take screenshots of it at certain times:

ffmpeg -ss 4212 -i filename.mkv -frames:v 1 -q:v 2 out.jpg

This will produce a 718x432 jpg image, but the aspect ratio is wrong (the image is "squeezed" horizontally). AFAIK, the output image should be 1015*432 (with width=height * DAR). Is this calculation correct?

Is there a way to have ffmpeg output images with the correct size/AR for all videos (i.e. no "hardcoded" values)? I tried playing with the setdar/setsar filters without success.
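One generic way (no hard-coded sizes) is to let ffmpeg apply the stored sample aspect ratio itself when taking the screenshot:

ffmpeg -ss 4212 -i filename.mkv -frames:v 1 -q:v 2 -vf "scale=iw*sar:ih" out.jpg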

Also, out of curiosity, trying to obtain SAR and DAR with ffmpeg produces:

Stream #0:0(eng): Video: h264 (High), yuv420p(tv, smpte170m/smpte170m/bt709, progressive),
718x432 [SAR 64:45 DAR 2872:1215], SAR 155:109 DAR 55645:23544, 24.99 fps, 24.99 tbr, 1k tbn, 49.98 tbc (default)

2872/1215 is 2.363, a slightly different value from what mediainfo reported. Does anyone know why?

Apache protected HLS stream, weird behaviour


I have 2 webcam streams published to the web (restreamed from the local LAN) via HTTPS and Apache.

In my vhost file I have the following setting:


AuthName "Member Only"
AuthType Basic
AuthUserFile /var/www/.htpasswd
require valid-user

When I start the stream via VLC it works fine: VLC and others (browsers) request the username and password from the user and streaming starts, and it plays fine indefinitely!

vlc -I dummy http://192.168.178.21:8080/ vlc://quit --sout='#std{access=livehttp{seglen=10,delsegs=true,numsegs=5,index=/var/www/https/webcam/dach/stream1.m3u8,index-url='"stream1-########.ts"'},mux=ts{use-key-frames},dst=/var/www/https/webcam/dach/stream1-########.ts}'

When I start it with avconv/ffmpeg:

avconv -re -i http://192.168.178.21:8080 -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -g 24 /var/www/https/webcam/dach/stream1.m3u8

VLC and browsers request the password and it plays for a few seconds, but then, after the first segments, when the player requests the new .m3u8, it fails with:

401:Unauthorized

When I turn off authentication it works fine again with avconv, so I guess this has something to do with Apache?

Here's the generated m3u8 by VLC:

#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-VERSION:3
#EXT-X-ALLOW-CACHE:NO
#EXT-X-MEDIA-SEQUENCE:5
#EXTINF:10.00,
stream1-00000005.ts
#EXTINF:10.00,
stream1-00000006.ts
#EXTINF:10.00,
stream1-00000007.ts
#EXTINF:10.00,
stream1-00000008.ts
#EXTINF:10.00,
stream1-00000009.ts

and the one by avconv:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:5
#EXTINF:10,
stream15.ts
#EXTINF:10,
stream16.ts
#EXTINF:10,
stream17.ts
#EXTINF:10,
stream18.ts
#EXTINF:10,
stream19.ts
#EXTINF:10,
stream10.ts
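One way to narrow down whether Apache or the player is at fault is to request the avconv-generated playlist and one of its segments directly with credentials and compare the status codes (host, user and password are placeholders):

curl -u user:pass -I https://example.org/webcam/dach/stream1.m3u8
curl -u user:pass -I https://example.org/webcam/dach/stream15.ts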

Can somebody give me a hint? I need to use avconv since it allows me to map the video stream before the audio stream (video needs to be 0 and Webcam stream is often mixed up)

thanks

FFmpeg: slide text in from right to left and leave from left to right after x seconds


I'm trying to slide a text into my video from right to left, and have it leave the video after 13 seconds. The text has to stay in place for 13 seconds and then leave the video in the opposite direction.

Right now I'm using the following command:

ffmpeg -i Pool\ scores.m4v -vf "[in]drawtext=fontfile=/usr/share/fonts/truetype/msttcorefonts/Arial.ttf:fontsize=40:fontcolor=white:x=900:y=570:text='Marco':enable='between(t,11,24)' [out]" -c:v libx264 scrolling.m4v

So the text 'Marco' has to end up at x=900 and y=570. That's also the coordinate from which it has to leave.

The idea is to create a pool scoreboard where the video is auto-generated with dynamic text. The linked image shows an example of what the animation has to look like; I have to retime it to match the same speed. See example.
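A sketch of an animated x expression, assuming the text slides in between 11 s and 13 s, holds at x=900 until 22 s, and slides out again by 24 s (the enable window from the command above); the speeds would still need retiming against the example:

ffmpeg -i Pool\ scores.m4v -vf "drawtext=fontfile=/usr/share/fonts/truetype/msttcorefonts/Arial.ttf:fontsize=40:fontcolor=white:y=570:x='if(lt(t,13), w-(w-900)*(t-11)/2, if(lt(t,22), 900, 900-(t-22)*(900+tw)/2))':text='Marco':enable='between(t,11,24)'" -c:v libx264 scrolling.m4v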

Thanks in advance!!


How to check whether a live stream is still alive using the "ffprobe" command?


I want to schedule a job script that checks whether a live stream is still alive using the "ffprobe" command, so that I can update the database state for streams that are already dead.

I tried the command:

ffprobe -v quiet -print_format json -show_streams rtmp://xxxx

but when the stream is not available, the command hangs.
I tried adding the -timeout argument, but it still does not work properly.
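Since ffprobe itself can block on a dead RTMP source, one workaround is to bound it with the coreutils timeout command and act on the exit status; a sketch:

timeout 15 ffprobe -v quiet -print_format json -show_streams rtmp://xxxx
if [ $? -ne 0 ]; then
    echo "stream appears to be down"
fi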

Evolution #4169 (New): Make #INTRODUCTION meaningful in SPIP 3

$
0
0

The #INTRODUCTION tag has not been extended to all objects; it only works for articles and sections (rubriques). See the source: https://core.spip.net/projects/spip/repository/entry/spip/ecrire/public/balises.php#L810

All objects that have a #CHAPO field should get the same behaviour as articles when computing their #INTRODUCTION (newsletters, for example). Indeed, if you use the #INTRODUCTION tag it is because you want to merge the lead (chapo) and the text; otherwise you would not use this tag at all, just #TEXTE. So updating #INTRODUCTION to systematically use the chapo, for every table that has one, should not hurt anyone.

See the discussion: https://contrib.spip.net/Newsletters#forum497836
