Channel: MediaSPIP

ffmpeg not working with filenames that have whitespace


I'm using FFMPEG to measure the duration of videos stored in an Amazon S3 Bucket.

I've read the FFMPEG docs, and they explicitly state that all whitespace and special characters need to be escaped, in order for FFMPEG to handle them properly:

See docs 2.1 and 2.1.1: https://ffmpeg.org/ffmpeg-utils.html

However, when dealing with files whose filenames contain whitespace, ffmpeg fails to render a result.

I've tried the following, with no success:

ffmpeg -i "http://s3.mybucketname.com/videos/my\ video\ file.mov" 2>&1 | grep Duration | awk '{print $2}' | tr -d
ffmpeg -i "http://s3.mybucketname.com/videos/my video file.mov" 2>&1 | grep Duration | awk '{print $2}' | tr -d
ffmpeg -i "http://s3.mybucketname.com/videos/my'\' video'\' file.mov" 2>&1 | grep Duration | awk '{print $2}' | tr -d
ffmpeg -i "http://s3.mybucketname.com/videos/my\ video\ file.mov" 2>&1 | grep Duration | awk '{print $2}' | tr -d

However, if I strip the whitespace out of the filename, all is well and the duration of the video is returned.
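One more variant worth listing (a sketch, assuming the object really is reachable at that URL): since this is an HTTP URL rather than a local path, percent-encoding the spaces instead of shell-escaping them may be what is needed here.

ffmpeg -i "http://s3.mybucketname.com/videos/my%20video%20file.mov" 2>&1 | grep Duration | awk '{print $2}'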


Viewing video stream from a network camera


I have placed the below code in a webpage and viewed it over my local web server:

Cam1 feed

The stream is configured to output MPEG, not RTSP.

This works without issues when viewed on a Windows computer with Chrome and Firefox, and is OK when viewed on my Android phone with Chrome. But it does not work when viewed on an iPhone or iPad with either Chrome or Safari.

After searching for a while, I am coming to the conclusion that I should try encoding the stream for WebRTC and adding it to a video tag.

Is this the right approach, given that it was fairly easy to get working on a computer and an Android phone?
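Another route I am weighing before committing to WebRTC (a sketch only; the camera URL and output path below are placeholders, not my real setup): repackage the camera feed into HLS with ffmpeg, since iOS Safari plays HLS natively in a plain video tag.

ffmpeg -i http://camera-address/stream -an -c:v libx264 -preset veryfast -tune zerolatency -f hls -hls_time 2 -hls_list_size 6 -hls_flags delete_segments /var/www/html/cam1/index.m3u8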

Getting "Your FFProbe version is too old..." error in Laravel during file upload with FFMpeg


Currently I'm working on a video uploader (to S3) script in Laravel. I'd like to get some info about the uploaded video, and later I'd like to create thumbnails as well (with the help of this plugin). As a first step I've installed FFMpeg with brew:

$ brew update
$ brew upgrade
$ brew cleanup
$ brew install ffmpeg --force
$ brew link ffmpeg

Then, via Composer:

$ composer require php-ffmpeg/php-ffmpeg

When I check the installation I get the following:

which ffmpeg
/usr/local/bin/ffmpeg

which ffprobe
/usr/local/bin/ffprobe

By checking the version:

ffmpeg -version
ffmpeg version 4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
 built with Apple clang version 11.0.3 (clang-1103.0.32.62)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.3.1 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
 libavutil 56. 51.100 / 56. 51.100
 libavcodec 58. 91.100 / 58. 91.100
 libavformat 58. 45.100 / 58. 45.100
 libavdevice 58. 10.100 / 58. 10.100
 libavfilter 7. 85.100 / 7. 85.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 7.100 / 5. 7.100
 libswresample 3. 7.100 / 3. 7.100
 libpostproc 55. 7.100 / 55. 7.100

The path also seems to be OK:

echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

But when I try to upload a video file (sample.mp4) I get the following error message: Your FFProbe version is too old and does not support -help option, please upgrade.

Here's the snippet from my code to test the upload:

use FFMpeg;

public function upload(Request $request)
{
    if ($request->hasFile('files')) {
        $files = $request->file('files');
        foreach ($files as $key => $file) {
            $filename = pathinfo($file->getClientOriginalName(), PATHINFO_FILENAME);
            $extension = $file->getClientOriginalExtension();
            $filename = str_slug($filename).'.'.$extension;
            Storage::disk('s3Files')->put($filename, file_get_contents($file), 'public');
            $fileurl = \Config::get('s3.files').$filename;

            $ffprobe = FFMpeg\FFProbe::create([
                'ffmpeg.binaries'  => '/usr/local/bin/ffmpeg',
                'ffprobe.binaries' => '/usr/local/bin/ffprobe'
            ]);

            $filesave = new File();
            $filesave->name = $filename;
            $filesave->type = $file->getClientMimeType();
            $filesave->size = $file->getSize();
            $filesave->duration = $ffprobe->format($fileurl)->get('duration');
            $filesave->save();
        }
    }
}

I've now spent hours trying to find a solution (also checking this thread), but I couldn't solve the issue.

My dev environment runs on Mac OS X 10.15.5, with Nginx and PHP 7.4.

Do you have any idea how I could fix this problem?
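One diagnostic I still plan to run (just a sanity check, not a fix): call ffprobe from the same PHP context that handles the upload, to rule out the web-server user being unable to execute the binary at all.

 // Diagnostic only: dump whatever the PHP process actually gets back from ffprobe
 dd(shell_exec('/usr/local/bin/ffprobe -version 2>&1'));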


ffmpeg separate images from %3.png format


I want to feed an input -i input%3.png into a filter graph and then use those inputs in a -filter_complex, like I normally would with [x:v] or [0:v:x] where x is the index. Neither of those works, throwing errors like Invalid file index 1 in filtergraph description or Stream specifier :v:1 in ... matches no stream.
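For context, my current understanding (which may be wrong) is that an image sequence supplied through a single -i is demuxed as one video stream, so inside -filter_complex it is addressed as [0:v] rather than as one stream per image; a sketch using the usual %03d pattern:

ffmpeg -i input%03d.png -filter_complex "[0:v]scale=640:-2[out]" -map "[out]" output.mp4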

No such file error using pydub on OSX with pycharm


My ultimate aim is to run the code snippet below on Lambda, but as I was having difficulties, I tried running it on my Mac. I get the same error running with Python 2.7 on OSX as I do when I run it on AWS Lambda.

The code is:

from pydub import AudioSegment
import os

def test():
    print("Starting")

    files = [f for f in os.listdir('.') if os.path.isfile(f)]
    for f in files:
        print(f)

    sound = AudioSegment.from_mp3("test.mp3")

test()

The output of the code from PyCharm is:

Starting
ffmpeg
.DS_Store
requirements.txt
concat.py
test.mp3
ffprobe
Traceback (most recent call last):
 File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py", line 1438, in _exec
 pydev_imports.execfile(file, globals, locals) # execute the script
 File "/Users/mh/Desktop/sC/concat/concat.py", line 13, in 
 test()
 File "/Users/mh/Desktop/sC/concat/concat.py", line 11, in test
 sound = AudioSegment.from_mp3("test.mp3")
 File "/Users/mh/Desktop/sC/concat/venv2.7/lib/python2.7/site-packages/pydub/audio_segment.py", line 738, in from_mp3
 return cls.from_file(file, 'mp3', parameters=parameters)
 File "/Users/mh/Desktop/sC/concat/venv2.7/lib/python2.7/site-packages/pydub/audio_segment.py", line 685, in from_file
 info = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit)
 File "/Users/mh/Desktop/sC/concat/venv2.7/lib/python2.7/site-packages/pydub/utils.py", line 274, in mediainfo_json
 res = Popen(command, stdin=stdin_parameter, stdout=PIPE, stderr=PIPE)
 File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 394, in __init__
 errread, errwrite)
 File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1047, in _execute_child
 raise child_exception
OSError: [Errno 2] No such file or directory

Is this actually a problem with pydub/ffmpeg/ffprobe, rather than the location of my mp3 file? As I'm trying to package this project for Lambda, I've put executable versions of ffmpeg and ffprobe in the root of the project, rather than installing them to my OS. Before I did this, pydub complained that it couldn't find ffmpeg. It's now not complaining, but could I have chosen the wrong binary?

Any ideas?
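One thing I plan to try next (an assumption on my part, not a confirmed fix): make the bundled binaries discoverable by prepending the project root to PATH, since pydub looks the executables up by name, and point AudioSegment.converter at the bundled ffmpeg explicitly. The binaries also need the executable bit set (chmod +x ffmpeg ffprobe).

 import os
 here = os.path.dirname(os.path.abspath(__file__))
 os.environ["PATH"] = here + os.pathsep + os.environ.get("PATH", "")

 from pydub import AudioSegment
 AudioSegment.converter = os.path.join(here, "ffmpeg")  # explicit path to the bundled binary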

Embedding pure Python (CPython) on Android


I want to create a movie download app for Android as a learning exercise. To make development easier, I would like to use youtube-dl as the downloader backend.

So I want to embed the CPython runtime and ffmpeg (to convert movie formats) into the Android app. Is this possible with the Android NDK?

Note that I know better ways exist (like using a Java-friendly Python runtime, or implementing the downloader as an online server).

But I want to try embedding Python and ffmpeg in the app in order to learn.

Can this be done with the Android NDK?

HLS service using FFMPEG


I am thinking of creating an environment for HLS using FFmpeg. I would like to encode ISO files to m3u8 playlists automatically. I am also looking for companies that can configure the environment. Does anyone know a good service provider for HLS?
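For reference, the kind of conversion I have in mind is roughly this (a sketch with a generic mp4 input; my real sources are ISO files, which would need to be handled first):

ffmpeg -i input.mp4 -c:v libx264 -c:a aac -hls_time 6 -hls_playlist_type vod -hls_list_size 0 output.m3u8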

Thank you. :)

Stream image sensor data as video in a Non-Live stream


I have image sensor data (in my case in a ROS bag file), i.e. many single images that I have to stream to a web interface. The goal is simply to have a video in the browser where I can select the time on the seek bar; that means I don't want a live stream.

Live encoding and streaming with FFmpeg works.

What is the best approach? The total Content-Length is unknown, since it is on-the-fly encoding, so I don't know how to implement 206 Partial Content where I send a Content-Range. Even if this were working, I would still need to encode valid partial content with FFmpeg.
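One direction I am considering (a sketch; it assumes I first dump the bag's images as numbered PNG frames, which is an assumption on my side): encode the frames into a VOD HLS playlist, since a segmented playlist is seekable in the browser without the server knowing a total Content-Length.

ffmpeg -framerate 30 -i frame_%06d.png -c:v libx264 -pix_fmt yuv420p -hls_time 4 -hls_playlist_type vod -hls_list_size 0 stream.m3u8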


Class 'Pbmedia\LaravelFFMpeg\FFMpeg' not found


Inside an Artisan command I have this:


 FFMpeg::fromDisk('songs')
 ->open('this.mp4')
 ->export()
 ->toDisk('converted_songs')
 ->inFormat(new \FFMpeg\Format\Audio\Aac)
 ->save('yesterday.aac');

At the top of the file I already added:

use Pbmedia\LaravelFFMpeg\FFMpeg;

I am getting this error:

Class 'Pbmedia\LaravelFFMpeg\FFMpeg' not found

I am also using Laravel Zero, and I have tried everything possible. I've been stuck on this for hours now; any ideas?
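One thing I still want to rule out (a guess on my part: the package was renamed at some point, and the exact class path should be verified against the version in vendor/): newer major releases of laravel-ffmpeg ship under the ProtoneMedia namespace rather than Pbmedia, in which case the old use statement would not resolve.

 // Hypothetical: only valid if a ProtoneMedia release of the package is installed
 use ProtoneMedia\LaravelFFMpeg\Support\FFMpeg;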

Audio effect (a 20 ms delay between the right and the left channel) using the Web Audio API or any JavaScript audio library like howler.js or tone.js?


I was wondering if there is any option in howler.js, tone.js, or any other JavaScript audio library which I can use to add a 20 ms delay between the right and the left channel, which makes the audio listening experience more immersive.

Can it be achieved using audio sprites with howler.js? (But I guess sprites can't separate the right and the left channels.) https://medium.com/game-development-stuff/how-to-create-audiosprites-to-use-with-howler-js-beed5d006ac1

Is there any way to do this?

I have also asked the same question here: https://github.com/goldfire/howler.js/issues/1374

I usually enable this option under the ffdshow audio processor while playing audio with MPC-HC (Mega version) on my PC. I was wondering how I can do it using the Web Audio API or howler.js?
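For reference, the routing I am after, expressed with the plain Web Audio API, would be something like this (a sketch; it assumes an audio element with id="player" and is not howler.js-specific): split the stereo signal, delay one channel by 20 ms with a DelayNode, and merge back.

 const ctx = new AudioContext();
 const source = ctx.createMediaElementSource(document.getElementById('player'));
 const splitter = ctx.createChannelSplitter(2);
 const merger = ctx.createChannelMerger(2);
 const delay = ctx.createDelay();
 delay.delayTime.value = 0.02; // 20 ms

 source.connect(splitter);
 splitter.connect(delay, 0);     // left channel into the delay
 delay.connect(merger, 0, 0);    // delayed left -> merger input 0
 splitter.connect(merger, 1, 1); // right channel passes straight through
 merger.connect(ctx.destination);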


-- https://medium.com/game-development-stuff/how-to-create-audiosprites-to-use-with-howler-js-beed5d006ac1, https://github.com/goldfire/howler.js/issues/1374, enter image description here

The system cannot find the file specified with ffmpeg


In the process of using the ffmpeg module to edit video files, I used the subprocess module.

The code is as follows:

# trim bit

import subprocess
import os

seconds = "4"
mypath = os.path.abspath('trial.mp4')
subprocess.call(['ffmpeg', '-i', mypath, '-ss', seconds, 'trimmed.mp4'])

Error message:

Traceback (most recent call last):
 File "C:\moviepy-master\resizer.py", line 29, in 
 subprocess.call(['ffmpeg', '-i',mypath, '-ss', seconds, 'trimmed.mp4'])
 File "C:\Python27\lib\subprocess.py", line 168, in call
 return Popen(*popenargs, **kwargs).wait()
 File "C:\Python27\lib\subprocess.py", line 390, in __init__
 errread, errwrite)
 File "C:\Python27\lib\subprocess.py", line 640, in _execute_child
 startupinfo)
WindowsError: [Error 2] The system cannot find the file specified

After looking up similar problems, I understood that the module is unable to pick up the video file because it needs its path, so I used the absolute path. But in spite of that the error still shows up. The module where this code is saved and the video file trial.mp4 are in the same folder.
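One check I am planning (the path below is a hypothetical install location, not mine): WindowsError 2 from Popen usually refers to the executable itself rather than its arguments, so I want to point subprocess at the full path of ffmpeg.exe instead of relying on PATH.

 import os
 import subprocess

 ffmpeg_exe = r"C:\ffmpeg\bin\ffmpeg.exe"  # hypothetical location of the binary
 mypath = os.path.abspath("trial.mp4")
 subprocess.call([ffmpeg_exe, "-i", mypath, "-ss", "4", "trimmed.mp4"])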

How to generate MPEG-DASH stream with ClearKey DRM using FFmpeg


I want to create an MPEG-DASH stream with ClearKey DRM using FFmpeg.

The stream generation is already complete and working (using the C API).

How could I (using either C or the CLI) add ClearKey DRM to the stream?
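For reference, the only encryption hooks I have found on the CLI side so far are the CENC options of the mp4 muxer (a sketch: the key and KID below are placeholders, I have not confirmed that these options carry over to the dash muxer, and the ClearKey license information still has to be declared in the MPD separately):

ffmpeg -i input.mp4 -c copy -encryption_scheme cenc-aes-ctr -encryption_key 76a6c65c5ea762046bd749a2e632ccbb -encryption_kid a7e61c373e219033c21091fa607bf3b8 encrypted.mp4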

ffmpeg : md5 of m3u8 playlists generated from same input video with different segment durations (after applying video filter) don't match


Here are a few commands I am using to convert and resize a video in mp4 format to an m3u8 playlist.

For a given input video (mp4 format), generate multiple video-only segments with a segment duration of 30 s:

ffmpeg -loglevel error -i input.mp4 -dn -sn -an -c:v copy -bsf:v h264_mp4toannexb -copyts -start_at_zero -f segment -segment_time 30 30%03d.mp4 -dn -sn -vn -c:a copy audio.aac

Apply a video filter (in this case scaling) to each segment and convert it to m3u8 format:

ls 30*.mp4 | parallel 'ffmpeg -loglevel error -i {} -vf scale=-2:144 -hls_list_size 0 {}.m3u8'

Store the list of generated m3u8 files in list.txt, one line per file in the format file 'segment-name.m3u8':

for f in 30*.m3u8; do echo "file '$f'">> list.txt; done

Using the concat demuxer, combine all the segment files (which are in m3u8 format) and the audio to get one final m3u8 playlist pointing to segments with a duration of 10 s:

ffmpeg -loglevel error -f concat -i list.txt -i audio.aac -c copy -hls_list_size 0 -hls_time 10 output_30.m3u8

I can change the segment duration in the first step from 30 s to 60 s, and compare the md5 of the final m3u8 playlists generated in the two cases using this command:

ffmpeg -loglevel error -i  -f md5 - 

The md5 of the output files differ, i.e. the video streams of output_30.m3u8 and output_60.m3u8 are not the same.

Can anyone elaborate on this?

(I expected the md5 to be the same)

Unknown encoder 'libx264'


I installed ffmpeg 0.8.9 on Ubuntu 11 with:

./configure --enable-gpl --enable-nonfree --enable-pthreads --enable-libfaac --enable-libmp3lame --enable-libx264

When I run it:

ffmpeg -y -i test.mp4 -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -vcodec libx264 -b 250k -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 250k -maxrate 250k -bufsize 250k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -aspect 320:240 -g 30 -async 2 a.ts

It said:

Unknown encoder 'libx264'

(Note: the same error could occur with avconv.)

How can I fix this? Thanks!
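For what it's worth, the checks I intend to run next (a sketch; with a build this old the exact flags may differ slightly): confirm which ffmpeg binary is actually on PATH and whether that particular build lists libx264 among its codecs.

which ffmpeg
ffmpeg -version
ffmpeg -codecs 2>/dev/null | grep 264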

ffmpeg issue when using it from kurento to rtmp stream


I successfully managed to connect ffmpeg to an RTP endpoint of my Kurento server. On many tries before, I got a "Connection timed out" due to my Docker configuration. In my Docker logs I now see the following after the RTP endpoint is created and ffmpeg is started:

streamy-server_1 | 2020-07-15 08:30:20.397 INFO 49 --- [nio-8080-exec-1] net.bramp.ffmpeg.RunProcessFunction : ffmpeg -y -v error -protocol_whitelist file,http,https,tcp,tls,udp,rtp -rtbufsize 1500M -re -i /tmp/test.sdp -f flv -vcodec libx264 -pix_fmt yuv420p -s 640x480 -r 20/1 -b:v 1000000 -acodec libmp3lame -ar 44100 -b:a 1000000 -bufsize 4000k -maxrate 1000k -profile:v baseline -deinterlace -preset medium -g 60 -r 30 rtmps://live-api-s.facebook.com:443/rtmp/3163232097002611?xyz
streamy-server_1 | [h264 @ 0x56538aeae0a0] non-existing PPS 0 referenced
streamy-server_1 | Last message repeated 1 times
streamy-server_1 | [h264 @ 0x56538aeae0a0] decode_slice_header error
streamy-server_1 | [h264 @ 0x56538aeae0a0] no frame!
streamy-server_1 | [h264 @ 0x56538aeae0a0] non-existing PPS 0 referenced
streamy-server_1 | Last message repeated 1 times
streamy-server_1 | [h264 @ 0x56538aeae0a0] decode_slice_header error
streamy-server_1 | [h264 @ 0x56538aeae0a0] no frame!
streamy-server_1 | [h264 @ 0x56538aeae0a0] non-existing PPS 0 referenced
streamy-server_1 | Last message repeated 1 times
streamy-server_1 | [h264 @ 0x56538aeae0a0] decode_slice_header error
streamy-server_1 | [h264 @ 0x56538aeae0a0] no frame!
streamy-server_1 | [h264 @ 0x56538aeae0a0] non-existing PPS 0 referenced
streamy-server_1 | Last message repeated 1 times
streamy-server_1 | [h264 @ 0x56538aeae0a0] decode_slice_header error
streamy-server_1 | [h264 @ 0x56538aeae0a0] no frame!
streamy-server_1 | [h264 @ 0x56538aeae0a0] non-existing PPS 0 referenced
streamy-server_1 | Last message repeated 1 times
[...] (this is repeated for the next ~20-30 seconds, then continues with:)
kurento_1 | 0:03:00.152967182 1 0x7faa58093b30 INFO KurentoWebSocketTransport WebSocketTransport.cpp:296:keepAliveSessions: Keep alive 998a9271-615e-490c-acce-6bc22d9592f7
streamy-server_1 | Too many packets buffered for output stream 0:0.
streamy-server_1 | 2020-07-15 08:30:38.361 ERROR 49 --- [nio-8080-exec-1] c.maximummgt.streamy.WebsocketsHandler : Unknown error while websockets session
streamy-server_1 | 
streamy-server_1 | java.lang.RuntimeException: java.io.IOException: ffmpeg returned non-zero exit status. Check stdout.
streamy-server_1 | at net.bramp.ffmpeg.job.SinglePassFFmpegJob.run(SinglePassFFmpegJob.java:46) ~[ffmpeg-0.6.2.jar:0.6.2]

Facebook Live does not receive the video stream. In the log I just see this message:

Facebook has not received video signal from the video source for some time. Check that the connectivity between the video source and Facebook is sufficient for the source resolution and bitrate. Check your video encoder logs for details. If problems persist, consider improving connection quality or reducing the bitrate of your video source.

In my Kurento Java code I am doing the following:

  1. Create an RTP endpoint

  2. Connect it with the video source from the user

  3. Create an SDP offer and save it to a file (currently /tmp/test.sdp)

  4. Process SDP offer with the RTP endpoint

  5. Start the ffmpeg process with (net.bramp.ffmpeg.builder.FFmpegBuilder):

    FFmpegBuilder builder = new FFmpegBuilder()
    .addExtraArgs("-protocol_whitelist", "file,http,https,tcp,tls,udp,rtp")
    .addExtraArgs("-rtbufsize", "1500M")
    .addExtraArgs("-re")
    .setInput("/tmp/test.sdp")
    .addOutput(rtmpURL)
    .setFormat("flv")
    .addExtraArgs("-bufsize", "4000k")
    .addExtraArgs("-maxrate", "1000k")
    .setAudioCodec("libmp3lame")
    .setAudioSampleRate(FFmpeg.AUDIO_SAMPLE_44100)
    .setAudioBitRate(1_000_000)
    .addExtraArgs("-profile:v", "baseline")
    .setVideoCodec("libx264")
    .setVideoPixelFormat("yuv420p")
    .setVideoResolution(width, height)
    .setVideoBitRate(1_000_000)
    .setVideoFrameRate(20)
    .addExtraArgs("-deinterlace")
    .addExtraArgs("-preset", "medium")
    .addExtraArgs("-g", "60")
    .addExtraArgs("-r", "30")
    .done();
    
    FFmpegExecutor executor = new FFmpegExecutor(ffmpeg, ffprobe);
    executor.createJob(builder).run();
    
    

Can somebody guide me on this issue? Thanks in advance.

EDIT 01: I have now disabled

// FFmpegExecutor executor = new FFmpegExecutor(ffmpeg, ffprobe);
// executor.createJob(builder).run();

so that ffmpeg does not start automatically. After Java created test.sdp, I ran ffmpeg myself in the console, not streaming to Facebook but writing to an mp4:

ffmpeg -loglevel debug -protocol_whitelist file,crypto,udp,rtp -re -vcodec libvpx -acodec opus -i /tmp/test.sdp -vcodec libx264 -acodec aac -y output.mp4

The output looks as follows (I interrupted it after ~25 s with CTRL+C):

root@app:/var/www# ffmpeg -loglevel debug -protocol_whitelist file,crypto,udp,rtp -re -vcodec libvpx -acodec opus -i /tmp/test.sdp -vcodec libx264 -acodec aac -y output.mp4
ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
 built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
 configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
 libavutil 55. 78.100 / 55. 78.100
 libavcodec 57.107.100 / 57.107.100
 libavformat 57. 83.100 / 57. 83.100
 libavdevice 57. 10.100 / 57. 10.100
 libavfilter 6.107.100 / 6.107.100
 libavresample 3. 7. 0 / 3. 7. 0
 libswscale 4. 8.100 / 4. 8.100
 libswresample 2. 9.100 / 2. 9.100
 libpostproc 54. 7.100 / 54. 7.100
Splitting the commandline.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-protocol_whitelist' ... matched as AVOption 'protocol_whitelist' with argument 'file,crypto,udp,rtp'.
Reading option '-re' ... matched as option 're' (read input at native frame rate) with argument '1'.
Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'libvpx'.
Reading option '-acodec' ... matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'opus'.
Reading option '-i' ... matched as input url with argument '/tmp/test.sdp'.
Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'libx264'.
Reading option '-acodec' ... matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'aac'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option 'output.mp4' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option loglevel (set logging level) with argument debug.
Applying option y (overwrite output files) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input url /tmp/test.sdp.
Applying option re (read input at native frame rate) with argument 1.
Applying option vcodec (force video codec ('copy' to copy stream)) with argument libvpx.
Applying option acodec (force audio codec ('copy' to copy stream)) with argument opus.
Successfully parsed a group of options.
Opening an input file: /tmp/test.sdp.
[NULL @ 0x55c2344e6a00] Opening '/tmp/test.sdp' for reading
[sdp @ 0x55c2344e6a00] Format sdp probed with size=2048 and score=50
[sdp @ 0x55c2344e6a00] audio codec set to: pcm_mulaw
[sdp @ 0x55c2344e6a00] audio samplerate set to: 44000
[sdp @ 0x55c2344e6a00] audio channels set to: 1
[sdp @ 0x55c2344e6a00] video codec set to: h264
[sdp @ 0x55c2344e6a00] RTP Packetization Mode: 1
[udp @ 0x55c2344e90c0] end receive buffer size reported is 131072
[udp @ 0x55c2344e9320] end receive buffer size reported is 131072
[sdp @ 0x55c2344e6a00] setting jitter buffer size to 500
[udp @ 0x55c2344ea0a0] end receive buffer size reported is 131072
[udp @ 0x55c2344ea180] end receive buffer size reported is 131072
[sdp @ 0x55c2344e6a00] setting jitter buffer size to 500
[sdp @ 0x55c2344e6a00] Before avformat_find_stream_info() pos: 305 bytes read:305 seeks:0 nb_streams:2
[libvpx @ 0x55c2344eeb80] v1.7.0
[libvpx @ 0x55c2344eeb80] --prefix=/usr --enable-pic --enable-shared --disable-install-bins --disable-install-srcs --size-limit=16384x16384 --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --target=x86_64-linux-gcc
[libvpx @ 0x55c2344eeb80] Invalid sync code e06101.
[libvpx @ 0x55c2344eeb80] Failed to decode frame: Bitstream not supported by this decoder
[libvpx @ 0x55c2344eeb80] Invalid sync code e06101.
[libvpx @ 0x55c2344eeb80] Failed to decode frame: Bitstream not supported by this decoder
 Last message repeated 1 times
[libvpx @ 0x55c2344eeb80] Invalid sync code e06101.
[libvpx @ 0x55c2344eeb80] Failed to decode frame: Bitstream not supported by this decoder
[sdp @ 0x55c2344e6a00] Non-increasing DTS in stream 1: packet 3 with DTS 5940, packet 4 with DTS 5940
[...]
[sdp @ 0x55c2344e6a00] Non-increasing DTS in stream 1: packet 956 with DTS 2027726, packet 957 with DTS 2027726
[libvpx @ 0x55c2344eeb80] Failed to decode frame: Bitstream not supported by this decoder
[libvpx @ 0x55c2344eeb80] Invalid sync code e06101.
[libvpx @ 0x55c2344eeb80] Failed to decode frame: Bitstream not supported by this decoder
[libvpx @ 0x55c2344eeb80] Invalid sync code e06101.
[libvpx @ 0x55c2344eeb80] Failed to decode frame: Bitstream not supported by this decoder
[libvpx @ 0x55c2344eeb80] Invalid sync code e06101.
[libvpx @ 0x55c2344eeb80] Failed to decode frame: Bitstream not supported by this decoder
[libvpx @ 0x55c2344eeb80] Invalid sync code 4a3bd8.
[sdp @ 0x55c2344e6a00] Non-increasing DTS in stream 1: packet 960 with DTS 2036690, packet 961 with DTS 2036690
[libvpx @ 0x55c2344eeb80] Failed to decode frame: Bitstream not supported by this decoder
[sdp @ 0x55c2344e6a00] interrupted
[sdp @ 0x55c2344e6a00] decoding for stream 1 failed
[sdp @ 0x55c2344e6a00] rfps: 30.000000 0.000926
[sdp @ 0x55c2344e6a00] rfps: 60.000000 0.003706
[sdp @ 0x55c2344e6a00] rfps: 120.000000 0.014824
[sdp @ 0x55c2344e6a00] Setting avg frame rate based on r frame rate
[sdp @ 0x55c2344e6a00] Could not find codec parameters for stream 1 (Video: vp8 (libvpx), 1 reference frame, none(progressive)): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
[sdp @ 0x55c2344e6a00] After avformat_find_stream_info() pos: 305 bytes read:305 seeks:0 frames:962
Input #0, sdp, from '/tmp/test.sdp':
 Metadata:
 title : KMS
 Duration: N/A, start: 0.033000, bitrate: N/A
 Stream #0:0, 0, 1/44000: Audio: opus, 48000 Hz, mono, fltp
 Stream #0:1, 962, 1/90000: Video: vp8, 1 reference frame, none(progressive), 30 fps, 30 tbr, 90k tbn, 90k tbc
Successfully opened the file.
Parsing a group of options: output url output.mp4.
Applying option vcodec (force video codec ('copy' to copy stream)) with argument libx264.
Applying option acodec (force audio codec ('copy' to copy stream)) with argument aac.
Successfully parsed a group of options.
Opening an output file: output.mp4.
[file @ 0x55c234563320] Setting default whitelist 'file,crypto'
Successfully opened the file.
[libvpx @ 0x55c2344eb0e0] v1.7.0
[libvpx @ 0x55c2344eb0e0] --prefix=/usr --enable-pic --enable-shared --disable-install-bins --disable-install-srcs --size-limit=16384x16384 --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --target=x86_64-linux-gcc
Stream mapping:
 Stream #0:1 -> #0:0 (vp8 (libvpx) -> h264 (libx264))
 Stream #0:0 -> #0:1 (opus (native) -> aac (native))
Press [q] to stop, [?] for help
Finishing stream 0:0 without any data written to it.
Finishing stream 0:1 without any data written to it.
detected 2 logical cores
[graph_1_in_0_0 @ 0x55c234561360] Setting 'time_base' to value '1/48000'
[graph_1_in_0_0 @ 0x55c234561360] Setting 'sample_rate' to value '48000'
[graph_1_in_0_0 @ 0x55c234561360] Setting 'sample_fmt' to value 'fltp'
[graph_1_in_0_0 @ 0x55c234561360] Setting 'channel_layout' to value '0x4'
[graph_1_in_0_0 @ 0x55c234561360] tb:1/48000 samplefmt:fltp samplerate:48000 chlayout:0x4
[format_out_0_1 @ 0x55c2345611e0] Setting 'sample_fmts' to value 'fltp'
[format_out_0_1 @ 0x55c2345611e0] Setting 'sample_rates' to value '96000|88200|64000|48000|44100|32000|24000|22050|16000|12000|11025|8000|7350'
[AVFilterGraph @ 0x55c234560620] query_formats: 4 queried, 9 merged, 0 already done, 0 delayed
Nothing was written into output file 0 (output.mp4), because at least one of its streams received no packets.
frame= 0 fps=0.0 q=0.0 Lsize= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s speed=N/A
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Input file #0 (/tmp/test.sdp):
 Input stream #0:0 (audio): 0 packets read (0 bytes); 0 frames decoded (0 samples);
 Input stream #0:1 (video): 0 packets read (0 bytes); 0 frames decoded;
 Total: 0 packets (0 bytes) demuxed
Output file #0 (output.mp4):
 Output stream #0:0 (video): 0 frames encoded; 0 packets muxed (0 bytes);
 Output stream #0:1 (audio): 0 frames encoded (0 samples); 0 packets muxed (0 bytes);
 Total: 0 packets (0 bytes) muxed
0 frames successfully decoded, 0 decoding errors
[AVIOContext @ 0x55c234563420] Statistics: 0 seeks, 0 writeouts
[aac @ 0x55c23458bea0] Qavg: -nan
[AVIOContext @ 0x55c2344ef6e0] Statistics: 305 bytes read, 0 seeks
Exiting normally, received signal 2.
root@app:/var/www# ls -lt
total 72
-rw-r--r-- 1 root root 0 Jul 15 09:12 output.mp4
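EDIT 02 (an observation from the debug output above, not a confirmed fix): the SDP probe sets the video codec to h264, yet my manual command forces -vcodec libvpx on the input side, which would explain the "Invalid sync code" / "Bitstream not supported" messages. Next I want to retry without forcing input decoders, letting ffmpeg take them from the SDP:

ffmpeg -loglevel debug -protocol_whitelist file,crypto,udp,rtp -re -i /tmp/test.sdp -vcodec libx264 -acodec aac -y output.mp4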

Streaming audio from mac with ffmpeg to nginx and playback with videojs


I'm playing around trying to stream my Mac's sound to a webpage.

Here's what I have so far:

On the Mac:

ffmpeg -f avfoundation -i ":2" -acodec libmp3lame -ab32k -ac 1 -f flv rtmp://myserver:8000/hls/live

On the nginx side:

events {
    worker_connections 1024;
}

rtmp {
    server {
        listen 8000;
        chunk_size 4000;
        application hls {
            live on;
            interleave on;
            hls on;
            hls_path /tmp/hls;
        }
    }
}

http {
    default_type application/octet-stream;
    sendfile off;
    tcp_nopush on;
    server {
        listen 8080;
        location /hls {
            add_header Cache-Control no-cache;
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /tmp;
        }
    }
}

Web side:




 
 Live Streaming







No matter what I do I can't get any sound (it's definitely playing on the Mac); I've also tried putting in a video tag instead, and I see the image but hear no sound. What's missing here? Can this even be achieved?
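The playback side I am aiming for is essentially a video.js player pointed at the HLS playlist, along these lines (a sketch, not my exact markup; it assumes video.js 7+, which plays HLS natively, and the version number is just an example):

 <link href="https://vjs.zencdn.net/7.8.4/video-js.css" rel="stylesheet"/>
 <video id="live" class="video-js" controls preload="auto" width="640" height="360"></video>
 <script src="https://vjs.zencdn.net/7.8.4/video.min.js"></script>
 <script>
   var player = videojs('live');
   player.src({ src: 'http://myserver:8080/hls/live.m3u8', type: 'application/x-mpegURL' });
 </script>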

Thanks

Why is ffmpeg processing time so slow?


I am using ffmpeg to convert and compress videos. When I upload my video file it takes a long time to process. The video can be 1.2 MB, 5.8 MB, or even 10 MB and it's still slow; I am just staring at the screen waiting for 20 minutes or even more. What can I do to speed up the process? If you need my code, here it is:

$viddy = new Video;
$file = $request->file('file');
$fileName = uniqid().$file->getClientOriginalName();

$request->file->move(public_path('/app'), $fileName);
$name_file = uniqid().'video.mp4';
$ffp = FFMpeg::fromDisk('local')
    ->open($fileName)
    ->addFilter(function ($filters) {
        $filters->resize(new \FFMpeg\Coordinate\Dimension(640, 480));
    })
    ->export()
    ->toDisk('s3')
    ->inFormat(new \FFMpeg\Format\Video\X264('libmp3lame'))
    ->save($name_file);

$imageName = Storage::disk('s3')->url($name_file);

$viddy->title = $imageName;
$viddy->save();
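One tweak I am considering (method names are from php-ffmpeg's default video format and should be double-checked against the installed version): configure the format object to cap the bitrate and use a faster x264 preset before exporting.

 $format = new \FFMpeg\Format\Video\X264('libmp3lame');
 $format->setKiloBitrate(1000);                              // cap the video bitrate
 $format->setAdditionalParameters(['-preset', 'veryfast']);  // trade output size for speed
 // ...then pass it along: ->inFormat($format)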

Thanks in advance

ffmpeg Option not found `var_stream_map a:0 a:1`


I am using fluent-ffmpeg with ffmpeg v4.1.3 for HLS multi-bitrate streaming, but I am not able to use the -var_stream_map option; it says the option is not found, although running the same command outside fluent-ffmpeg works fine.

Here is my code:

 const stream = ffmpeg(filePath);
 stream.outputOptions([
 '-preset slow',
 '-g 48',
 '-map 0:0',
 '-q:a:0 64k',
 '-q:a:1 128k',
 '-var_stream_map a:0 a:1',
 '-hls_time 6',
 '-f hls',
 '-hls_list_size 0',
 '-master_pl_name /tmp/master.m3u8',
 "-hls_segment_filename /tmp/v%v/fileSequence%d.t",
 "/tmp/v%v/prog_index.m3u8"
 ])
 .output('./master.m3u8')
 .on('progress', function(progress) {
 console.log('Processing: ' + progress.percent + '% done')
 })

Here is the output:

ffmpeg version 4.1.3-tessus https://evermeet.cx/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers
 built with Apple LLVM version 10.0.1 (clang-1001.0.46.3)
 configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libaom --enable-libass --enable-libbluray --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-version3 --pkg-config-flags=--static --disable-ffplay
 libavutil 56. 22.100 / 56. 22.100
 libavcodec 58. 35.100 / 58. 35.100
 libavformat 58. 20.100 / 58. 20.100
 libavdevice 58. 5.100 / 58. 5.100
 libavfilter 7. 40.101 / 7. 40.101
 libswscale 5. 3.100 / 5. 3.100
 libswresample 3. 3.100 / 3. 3.100
 libpostproc 55. 3.100 / 55. 3.100
Unrecognized option 'var_stream_map a:0 a:1'.
Error splitting the argument list: Option not found

Can someone tell me how I can pass it correctly?

Delay of 20 milliseconds in the left channel of an audio file (e.g mp3) using ffmpeg?


How do I add a delay of 20 milliseconds to the left channel of an audio file (e.g. mp3) using ffmpeg?

I usually enable this option under the ffdshow audio processor while playing audio with MPC-HC.

Is it possible to do the same in howler.js (a JavaScript audio library)? (I mean this audio effect.)


Somewhat like this kind of effect: just delay either channel by 20 ms, like we do in Adobe Audition.
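For reference, on the ffmpeg side the adelay filter takes one delay per channel in milliseconds, so delaying only the left (first) channel by 20 ms would look roughly like this (a sketch; the file is re-encoded in the process):

ffmpeg -i input.mp3 -af "adelay=20" output.mp3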


Android FFMpeg No such file or directory error


I am using ffmpeg for Android (via the Gradle dependency 'com.writingminds:FFmpegAndroid:0.3.2') and I am trying to crop a video to a 16:9 (w:h) ratio. The original video is 1080:1920 (w:h). When I execute the command I get an IOException: No such file or directory.

The command I am using:

-i /storage/emulated/0/Movies/MyApp/result_joined.mp4 -vf crop=1080:607 -preset ultrafast /storage/emulated/0/Movies/MyApp/result_cropped.mp4

The exception:

java.io.IOException: Error running exec(). Command: 
[/data/user/0/my.package.name/files/ffmpeg, -i, /storage/emulated/0/Movies/MyApp/result_joined.mp4, -vf, crop=1080:607, -preset, ultrafast, /storage/emulated/0/Movies/MyApp/result_cropped.mp4] Working Directory: null Environment: null
Caused by: java.io.IOException: No such file or directory

I have searched several Stack Overflow questions with no help. I also tried saving the files to internal storage instead of external storage, with the same result.
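One thing I have not yet ruled out (the API names below are from the writingminds library as I remember them and should be verified): the bundled ffmpeg binary has to be extracted via loadBinary() before execute() can find it at /data/user/0/my.package.name/files/ffmpeg.

 FFmpeg ffmpeg = FFmpeg.getInstance(context);
 ffmpeg.loadBinary(new LoadBinaryResponseHandler() {
     @Override
     public void onFailure() {
         Log.e("FFmpeg", "loadBinary failed: device architecture may not be supported");
     }
 });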

Any help?
