Channel: MediaSPIP
Viewing all 118201 articles

Is it possible to pregenerate an m3u8 file (the playlist only) and skip the transcoding?


I am trying to create a media server and only want to transcode video when it's played. However, I need the playlist in advance so that the client player can load the video metadata. Is this possible?

I want to do something like this:

Client -> GET m3u8 (pregenerated in advance)
Client -> GET ts -> Transcode only this single ts file
Client -> GET ts -> Transcode only this single ts file
Client -> GET ts -> Transcode only this single ts file

I don't want to transcode the entire video at once, I want to be able to only transcode the part that is requested.

Is this possible? Also open to using MPEG-DASH instead if needed.
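Yes, in principle: an HLS playlist is plain text, so if the segment boundaries are fixed in advance (for example, exactly 4-second segments enforced at transcode time with -force_key_frames), the playlist can be written before any media exists. A hedged sketch of such a pregenerated VOD playlist; the URIs and durations are assumptions:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:4.000000,
seg_000.ts
#EXTINF:4.000000,
seg_001.ts
#EXTINF:2.500000,
seg_002.ts
#EXT-X-ENDLIST
```

The server can then transcode seg_NNN.ts on demand, seeking to NNN × 4 s in the source. Each segment must begin with a keyframe exactly at its advertised boundary, otherwise players will glitch at segment joins.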


Duplicate class error: ffmpeg_kit_flutter_audio 5.1.0-LTS (used for audio conversion) conflicts with video_editor: ^2.1.0


I'm already using ffmpeg_kit_flutter_audio 5.1.0-LTS in my code for audio conversion. When I add video_editor: ^2.1.0, the build produces the error below. Is there any way to solve this?

FAILURE: Build failed with an exception.

  • What went wrong: Execution failed for task ':app:checkDebugDuplicateClasses'.

A failure occurred while executing com.android.build.gradle.internal.tasks.CheckDuplicatesRunnable Duplicate class com.arthenica.ffmpegkit.Abi found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.AbiDetect found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.AbstractSession found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.AsyncFFmpegExecuteTask found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.AsyncFFprobeExecuteTask found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.AsyncGetMediaInformationTask found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.BuildConfig found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.CameraSupport found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime 
(com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.Chapter found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.FFmpegKit found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.FFmpegKitConfig found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.FFmpegKitConfig$1 found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.FFmpegKitConfig$2 found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.FFmpegKitConfig$SAFProtocolUrl found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.FFmpegSession found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.FFmpegSessionCompleteCallback found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class 
com.arthenica.ffmpegkit.FFprobeKit found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.FFprobeSession found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.FFprobeSessionCompleteCallback found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.Level found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.Log found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.LogCallback found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.LogRedirectionStrategy found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.MediaInformation found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.MediaInformationJsonParser found in modules 
jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.MediaInformationSession found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.MediaInformationSessionCompleteCallback found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.NativeLoader found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.Packages found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.ReturnCode found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.Session found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.SessionState found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.Signal found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and 
jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.Statistics found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.StatisticsCallback found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1) Duplicate class com.arthenica.ffmpegkit.StreamInformation found in modules jetified-ffmpeg-kit-audio-5.1.LTS-runtime (com.arthenica:ffmpeg-kit-audio:5.1.LTS) and jetified-ffmpeg-kit-min-gpl-5.1-runtime (com.arthenica:ffmpeg-kit-min-gpl:5.1)

Go to the documentation to learn how to fix dependency resolution errors.

  • Try:

Run with --stacktrace option to get the stack trace.
Run with --info or --debug option to get more log output.
Run with --scan to get full insights.

BUILD FAILED in 6s

Exception: Gradle task assembleDebug failed with exit code 1
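The error means two different ffmpeg-kit flavors (the audio flavor pulled in by ffmpeg_kit_flutter_audio, and the min-gpl flavor pulled in transitively by video_editor) both ship the com.arthenica.ffmpegkit.* classes. One common approach, sketched here as an assumption about the setup rather than a confirmed fix, is to force all plugins onto a single ffmpeg-kit flavor via dependency_overrides in pubspec.yaml:

```yaml
# pubspec.yaml -- hedged sketch; the package name and version are assumptions.
# Pick the one flavor whose codecs your app actually needs.
dependency_overrides:
  ffmpeg_kit_flutter_min_gpl: 5.1.0-LTS
```

With a single flavor on the classpath, checkDebugDuplicateClasses should no longer see two copies of each class.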


FFMPEG in Android Kotlin - processed video should have specific resolution


I'm recording video from both the front and back cameras, producing a PIP video and a horizontally stacked video. I then need to merge the two. The problem is that merging requires both videos (PIP and stacked) to have the same resolution and aspect ratio, which is not the case. So the FFmpeg commands executed in code to generate these two videos need to be modified to make the resolution and aspect ratio match.

//app -> build.gradle
implementation "com.writingminds:FFmpegAndroid:0.3.2"
 private fun connectFfmPeg() {
 val overlayX = 10
 val overlayY = 10
 val overlayWidth = 200
 val overlayHeight = 350

 outputFile1 = createVideoPath().absolutePath
 outputFile2 = createVideoPath().absolutePath
 //Command to generate PIP video
 val cmd1 = arrayOf(
 "-y",
 "-i",
 videoPath1,
 "-i",
 videoPath2,
 "-filter_complex",
 "[1:v]scale=$overlayWidth:$overlayHeight [pip]; [0:v][pip] overlay=$overlayX:$overlayY",
 "-preset",
 "ultrafast",
 outputFile1
 )

 //Command to generate horizontal stack video
 val cmd2 = arrayOf(
 "-y",
 "-i",
 videoPath1,
 "-i",
 videoPath2,
 "-filter_complex",
 "hstack",
 "-preset",
 "ultrafast",
 outputFile2
 )

 val ffmpeg = FFmpeg.getInstance(this)
 //Both commands are executed
 //Following execution code is OK
 //Omitted for brevity
 }

Here is mergeVideos(), which is executed last.

 private fun mergeVideos(ffmpeg: FFmpeg) {
 //Sample command:
 /*
 ffmpeg -y -i output_a.mp4 -i output_b.mp4 \
 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" \
 -map "[outv]" -map "[outa]" -preset "ultrafast" output.mp4
 */
 finalOutputFile = createVideoPath().absolutePath

 val cmd = arrayOf(
 "-y",
 "-i",
 outputFile1,
 "-i",
 outputFile2,
 "-filter_complex",
 "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]",
 "-map", "[outv]",
 "-map", "[outa]",
 "-preset", "ultrafast",
 finalOutputFile
 )
 //Execution code omitted for brevity
}

Error: Upon execution of mergeVideos(), neither the progress nor the failure callback is invoked. Logcat shows nothing new, and the app does not crash either.

Possible solution: After the generated PIP and horizontally stacked videos were saved to my device's local storage, I moved them to my laptop and tried some FFmpeg commands at the prompt. There, the processing works:

//First two commands can't be executed in Kotlin code
//This is the main problem
ffmpeg -i v1.mp4 -vf "scale=640:640,setdar=1:1" output_a.mp4
ffmpeg -i v2.mp4 -vf "scale=640:640,setdar=1:1" output_b.mp4
ffmpeg -y -i output_a.mp4 -i output_b.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0]concat=n=2:v=1:a=1[outv][outa]" -map "[outv]" -map "[outa]" -preset "ultrafast" output.mp4
//Merge is successful via command prompt

Please suggest a solution.
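Since the three-step command-line pipeline works, one hedged option (filenames here are placeholders) is to fold the scaling into the same filter graph as the concat, so a single FFmpeg invocation, runnable from the Kotlin side like the existing commands, does everything without intermediate files:

```shell
# Scale both inputs to a common 640x640 / DAR 1:1 inside one filter graph,
# then concatenate video+audio; no temporary scaled files are needed.
FILTER='[0:v]scale=640:640,setdar=1:1[v0];[1:v]scale=640:640,setdar=1:1[v1];[v0][0:a:0][v1][1:a:0]concat=n=2:v=1:a=1[outv][outa]'
CMD="ffmpeg -y -i pip.mp4 -i stacked.mp4 -filter_complex $FILTER -map [outv] -map [outa] -preset ultrafast merged.mp4"
echo "$CMD"
```

Note that concat with a=1 requires an audio stream in both inputs; if one of the generated files has no audio, either drop a=1 or mix in silence first.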

How to compile ffmpeg.dll [closed]


I would like to compile an ffmpeg.dll dynamic link library that includes the full range of codecs, but when I search for guides I can only find resources for creating ffmpeg.exe. Can anyone point me to some resources? Thank you in advance.

how to generate video thumbnail in node.js?


I am building an app with node.js. I successfully uploaded the video, but I need to generate a video thumbnail for it. Currently I use node's exec to run an ffmpeg system command that makes the thumbnail.

 exec("C:/ffmpeg/bin/ffmpeg -i Video/"+ Name + " -ss 00:01:00.00 -r 1 -an -vframes 1 -f mjpeg Video/"+ Name + ".jpg")

This code comes from this tutorial: http://net.tutsplus.com/tutorials/javascript-ajax/how-to-create-a-resumable-video-uploade-in-node-js/

The code above did generate a .jpg file, but it is a full-size video screenshot rather than a thumbnail. Is there another way to generate a video thumbnail, or a way to run the ffmpeg command so that it produces a real (resized) thumbnail? I would also prefer a PNG file.


Convert 2 channel mp4 to each mono wav file using FFMPEG or Python code


I am new to audio files and their codecs.

I would like to convert a 2-channel mp4 file into individual mono wav files.

My understanding is that in a 2-channel recording, speech from each microphone is stored in a separate channel; when I split the channels into individual mono wav files, I get the speech from each microphone.

My intention here is to get the speech from each channel and convert it to text. That way I can set the speaker's name based on the channel.

I tried ffmpeg as well as Python code; unfortunately, I get two files with the same content.

Based on the following details, can someone construct an ffmpeg command or Python script to convert the 2-channel mp4 file into 2 individual mono wav files?

FFprobe ffprobe -i Two-Channel.mp4 -show_streams -select_streams a

Result

Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: isommp42
 encoder : Google
 Duration: 00:52:42.19, start: 0.000000, bitrate: 421 kb/s
 Stream #0:0[0x1](und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 640x360 [SAR 1:1 DAR 16:9], 322 kb/s, 25 fps, 25 tbr, 12800 tbn (default)
 Metadata:
 handler_name : ISO Media file produced by Google Inc.
 vendor_id : [0][0][0][0]
 Stream #0:1[0x2](eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 96 kb/s (default)
 Metadata:
 handler_name : ISO Media file produced by Google Inc.
 vendor_id : [0][0][0][0]
[STREAM]
index=1
codec_name=aac
codec_long_name=AAC (Advanced Audio Coding)
profile=LC
codec_type=audio
codec_tag_string=mp4a
codec_tag=0x6134706d
sample_fmt=fltp
sample_rate=44100
channels=2
channel_layout=stereo
bits_per_sample=0
initial_padding=0
id=0x2
r_frame_rate=0/0
avg_frame_rate=0/0
time_base=1/44100
start_pts=0
start_time=0.000000
duration_ts=139452416
duration=3162.186304
bit_rate=96000
max_bit_rate=N/A
bits_per_raw_sample=N/A
nb_frames=136184
nb_read_frames=N/A
nb_read_packets=N/A
extradata_size=16
DISPOSITION:default=1
DISPOSITION:dub=0
DISPOSITION:original=0
DISPOSITION:comment=0
DISPOSITION:lyrics=0
DISPOSITION:karaoke=0
DISPOSITION:forced=0
DISPOSITION:hearing_impaired=0
DISPOSITION:visual_impaired=0
DISPOSITION:clean_effects=0
DISPOSITION:attached_pic=0
DISPOSITION:timed_thumbnails=0
DISPOSITION:non_diegetic=0
DISPOSITION:captions=0
DISPOSITION:descriptions=0
DISPOSITION:metadata=0
DISPOSITION:dependent=0
DISPOSITION:still_image=0
TAG:language=eng
TAG:handler_name=ISO Media file produced by Google Inc.
TAG:vendor_id=[0][0][0][0]
[/STREAM] 

FFmpeg command

ffmpeg -i Two-Channel.mp4 -filter_complex "pan=mono|c0=0c0" left_channel.wav

Python code using FFmpeg: I converted the mp4 to wav and then tried the code shown in the attached screenshots (images not reproduced here).
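Assuming ffmpeg has already produced a 16-bit stereo WAV from the mp4 (e.g. ffmpeg -i Two-Channel.mp4 stereo.wav), the split itself can be done in pure standard-library Python. This is a minimal sketch with placeholder filenames; on the ffmpeg side, the channelsplit filter is the usual tool for the same job.

```python
import wave

def split_stereo(src_path, left_path, right_path):
    """Split a 16-bit stereo WAV into two mono WAV files."""
    with wave.open(src_path, "rb") as w:
        assert w.getnchannels() == 2, "expected a stereo file"
        assert w.getsampwidth() == 2, "expected 16-bit samples"
        rate = w.getframerate()
        frames = w.readframes(w.getnframes())
    # Frames are interleaved 16-bit samples: L0 R0 L1 R1 ...
    left, right = bytearray(), bytearray()
    for i in range(0, len(frames), 4):
        left += frames[i:i + 2]
        right += frames[i + 2:i + 4]
    for path, data in ((left_path, left), (right_path, right)):
        with wave.open(path, "wb") as out:
            out.setnchannels(1)
            out.setsampwidth(2)
            out.setframerate(rate)
            out.writeframes(bytes(data))
```

If both outputs come out identical no matter how the split is done, the source may simply be dual-mono (the same signal recorded on both channels); that is worth checking before blaming the tooling.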


FFMPEG Streaming using RTMP


I'm trying to create a stream with ffmpeg to send a video to a Red5 server. I've already managed to do this using this command:

ffmpeg -re -y -i "Videos\Video1.mp4" -c:v libx264 -b:v 600k -r 25 -s 640x360 -t 40 -vf yadif -b:a 64k -ac 1 -ar 44100 -f flv "rtmp://192.168.0.12/live/videostream"

My problem is that when ffmpeg finishes encoding the video, it stops the stream, which cuts the video short by 5-10 seconds for short videos; this gets worse with longer videos.

Is there a way to stop this behavior? I tried adding a blank 10-second video before and after the original video, but due to some encoding options I always end up losing audio. And this only sort of works on short videos; on longer videos the problem remains.

Any recommendations?

How to process a video to mp4 with ffmpeg for quality and compatibility?


I am beginning to get more serious about video. I process my videos into mp4 with ffmpeg on a fully updated Linux system, to use them directly in HTML5.

Now, I have old AVI videos that I want to convert to mp4 with ffmpeg for use with HTML5. In particular, I have this one:

http://luis.impa.br/photo/1101_aves_ce/caneleiro-de-chapeu-preto_femea_Quixada-CE-110126-E_05662+7a.avi

(I know, terrible quality... sorry). According to ffprobe:

Duration: 00:01:35.30, start: 0.000000, bitrate: 1284 kb/s
Stream #0:0: Video: mpeg4 (Simple Profile) (DX50 / 0x30355844), yuv420p, 640x480 [SAR 1:1 DAR 4:3], 1144 kb/s, 30 fps, 30 tbr, 30 tbn, 30 tbc
Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 44100 Hz, stereo, s16p, 128 kb/s

That seems perfect: mpeg4 video and mp3 audio. So I tried:

ffmpeg -i input.avi -acodec copy -vcodec copy output.mp4

It generates a file that plays nicely in mplayer, but not in Firefox, which reports an error:

Video format or MIME type not supported.

Chrome plays the audio, but no video is shown. Now, if I do:

ffmpeg -i input.avi output.mp4

Firefox works, but the video is re-encoded into a new one with half the size (half the bitrate). This is what ffprobe says about the re-encoded video:

major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.71.100
Duration: 00:01:35.30, start: 0.000000, bitrate: 685 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 640x480 [SAR 1:1 DAR 4:3], 548 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)

I suppose that I am losing a lot of quality (and time processing the video). So, my questions:

  1. Why are browsers not playing my video when I use the copy codecs?

  2. Can I use ffmpeg on this particular file without re-encoding? If so, how?

  3. If I need to re-encode, what are "reasonable" parameters to keep close to the original quality? Would something like

    ffmpeg -i input.avi -b:v 1024k -bufsize 1024k output.mp4

suffice for this video? It generates a new video with a size closer to the original one.

Thanks!
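On question 3: rather than chasing a target bitrate, x264's CRF mode is the usual way to specify quality. A hedged sketch with placeholder filenames (crf 18 is near-transparent; larger values give smaller files), also forcing yuv420p and faststart for browser playback:

```shell
CMD="ffmpeg -i input.avi -c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p \
-c:a aac -b:a 128k -movflags +faststart output.mp4"
echo "$CMD"
```

-movflags +faststart moves the index to the front of the file so HTML5 playback can start before the full download. On question 1: browsers generally only play H.264/AAC (or VP9/AV1) in mp4; DivX-style MPEG-4 Part 2 video and MP3 audio copied unchanged into an mp4 container are typically not playable in Firefox or Chrome, which would explain why the -c copy file fails.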


ffmpeg exited with code 1: Input/output error


On my backend Node server, I am trying to read a stream from an online radio station and save the audio as a file on my computer. Since streams have no start or end, I chose to record an arbitrary 5 seconds.

For this I'm using the fluent-ffmpeg npm package as an ffmpeg wrapper, plus the @ffmpeg-installer/ffmpeg npm package to point to my local ffmpeg binary. As the stream input, I'm reading from 89 FM - A Rádio Rock, a radio station from São Paulo, BR, with the stream available at http://26593.live.streamtheworld.com/RADIO_89FMAAC.aac

Here is the code I put together:

import ffmpeg from "fluent-ffmpeg"
import ffmpegInstaller from "@ffmpeg-installer/ffmpeg"
ffmpeg.setFfmpegPath(ffmpegInstaller.path);
console.log(ffmpegInstaller.path)
console.log(ffmpegInstaller.url)
console.log(ffmpegInstaller.version)

const pathInput = "http://26593.live.streamtheworld.com/RADIO_89FMAAC.aac"
// const pathInput = "./tmp/demo.avi"
const pathOutput = "./tmp/output.m4a"

ffmpeg()
 .input(pathInput)
 .duration(5)
 .save(pathOutput)
 .on("start", function(command) {
 console.log("Spawned Ffmpeg with command: "+ command)
 })
 .on("error", function (err) {
 console.log("An error occurred: "+ err.message)
 })
 .on("end", async function () {
 console.log("Processing finished!")
 })

On my local machine (Mac, running darwin-x64), the program outputs OK:

/.../node_modules/@ffmpeg-installer/darwin-x64/ffmpeg
https://evermeet.cx/ffmpeg/
92718-g092cb17983
Spawned Ffmpeg with command: ffmpeg -i http://26593.live.streamtheworld.com/RADIO_89FMAAC.aac -y -t 5 ./tmp/record.m4a
Processing finished!

On my Docker container (linux-x64), where I need to deploy the project, ffmpeg returns an error:

/usr/app/node_modules/@ffmpeg-installer/linux-x64/ffmpeg
https://www.johnvansickle.com/ffmpeg/
20181210-g0e8eb07980
Spawned Ffmpeg with command: ffmpeg -i http://26593.live.streamtheworld.com/RADIO_89FMAAC.aac -y -t 5 ./tmp/output.m4a
main-1 | An error occurred: ffmpeg exited with code 1: http://26593.live.streamtheworld.com/RADIO_89FMAAC.aac: Input/output error

Dockerfile for the container:

FROM node:alpine

WORKDIR /usr/app

COPY package*.json ./

# Install ffmpeg in the container:
RUN apk update
RUN apk add ffmpeg

RUN npm install

COPY . .

CMD [ "npm", "run", "dev" ]

Note: this error seems to happen only for the stream-to-file case. When using a path to a local .avi file as input (thus converting .avi to .m4a), both the localhost and Docker versions run fine.

Does anyone have a clue why this error happens in this version? Or how I can run an ffmpeg command server-side, in a Docker container, to record a radio stream?
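One hedged thing to try: the Dockerfile already installs a current ffmpeg via apk add ffmpeg, but the code then points fluent-ffmpeg at the @ffmpeg-installer binary, a static build from 2018 that may lack protocol or TLS support needed by this stream. Preferring the distro binary inside the container (the path is an assumption; apk typically installs to /usr/bin/ffmpeg) would look like:

```javascript
// Choose the system ffmpeg, with an environment-variable override.
const systemFfmpegPath = process.env.FFMPEG_PATH || "/usr/bin/ffmpeg";
console.log(systemFfmpegPath);

// Then, in the real app:
// import ffmpeg from "fluent-ffmpeg";
// ffmpeg.setFfmpegPath(systemFfmpegPath);
```

This keeps the Mac setup unchanged (set FFMPEG_PATH locally) while the container uses the ffmpeg that apk installed.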


Issue with adding FFMPEG to macOS path to convert mp3 to WAV in Python


I am trying to use a simple program to convert an .mp3 file to a .wav file in Python:

from os import path
from pydub import AudioSegment

src = "test1.mp3"
dst = "test1.wav"

# convert mp3 to wav
sound = AudioSegment.from_mp3(src)
sound.export(dst, format="wav")

When I try to run the program in an IDE, I get a FileNotFoundError; this is what the stack trace looks like:

RuntimeWarning: Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work
 warn("Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work", RuntimeWarning)

...

[Errno 2] No such file or directory: 'ffprobe': 'ffprobe'

I know this error means that the IDE is not able to locate my FFmpeg (because it is not on the correct path). I downloaded FFmpeg (ffprobe is included in the FFmpeg distribution) using:

pip install FFMPEG

However, even when I try to run the program from the command line using

python soundtest.py

I am getting another FileNotFoundError:

File "soundtest.py", line 13, in 
 with open('simple.html') as html_file:
FileNotFoundError: [Errno 2] No such file or directory: 'simple.html'

I am not sure why this is happening: besides FFmpeg not being on a path the IDE can access, I cannot run the program from the command line either.

Ideally, I would like to add the FFMPEG library to the system path so that it can be accessed, but I can't even get it to run from the command line. Any ideas what's going on here?

The answer provided here also gives more insight, but I am still running into the same error: Ffmpeg on idle python Mac
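If the core problem is that ffmpeg/ffprobe are not on the PATH the IDE sees, one hedged workaround (the directory below is an assumption; point it at wherever the actual ffmpeg and ffprobe binaries live) is to prepend it at runtime before importing pydub:

```python
import os

FFMPEG_DIR = "/usr/local/bin"  # hypothetical; adjust to your install location

# Prepend so pydub's lookup of 'ffmpeg'/'ffprobe' finds the binaries.
os.environ["PATH"] = FFMPEG_DIR + os.pathsep + os.environ.get("PATH", "")
print(os.environ["PATH"].split(os.pathsep)[0])
```

pydub also lets you set the paths explicitly via AudioSegment.converter and AudioSegment.ffprobe. Note that pip install FFMPEG installs a Python package, not the ffmpeg binary itself; the binary has to come from a real FFmpeg build (for example, Homebrew's brew install ffmpeg on macOS).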

ffmpeg - Seek to absolute time stamp in MPEG DASH segment


I need to extract short audio segments at specific timestamps from specific DASH segments.

I tried the following:

ffmpeg -ss 00:10:00 -i segment_x.m4s -t 10 out.mp3

-ss seeks relative to the segment's start time, however, not absolutely. That absolute time data is there, though; ffmpeg prints it during the conversion, like this:

Duration: 00:10:14.01, start: 595.018667, bitrate: 7 kb/s

How can I make ffmpeg extract the audio from exactly 00:10:00 to 00:10:10?
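One workaround is to compute the relative offset yourself from the start value ffmpeg prints. A sketch using the numbers from the question:

```shell
# 'start: 595.018667' came from the ffmpeg banner for this segment;
# to land on absolute 00:10:00 (600 s), seek to (600 - start) within it.
SEGMENT_START=595.018667
TARGET_ABS=600
REL=$(awk "BEGIN { printf \"%.6f\", $TARGET_ABS - $SEGMENT_START }")
echo "ffmpeg -ss $REL -i segment_x.m4s -t 10 out.mp3"
```

Alternatively, the input option -seek_timestamp (placed before -i) tells ffmpeg to treat the -ss value as an actual timestamp rather than an offset from the stream start; whether it helps may depend on the demuxer.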

FFMPEG Concatenating videos with same 25fps results in output file with 3.554fps


I created an AWS Lambda function in Node.js 18 that uses a static version-7 build of FFmpeg located in a Lambda layer. Unfortunately it's only the ffmpeg build and doesn't include ffprobe.

I have an mp4 audio file in one S3 bucket and a wav audio file in a second S3 bucket. I'm uploading the output file to a third S3 bucket.

Specs on the files (please let me know if more info is needed):

Audio: wav, 13kbps, aac (LC), 6:28 duration

Video: mp4, 1280x720 resolution, 25 frame rate, h264 codec, 3:27 duration

Goal: Create blank video to fill the duration gaps so the full audio is covered before and after the mp4 video (using timestamps and durations). Strip the mp4's own audio and use only the wav audio. The output should be an mp4 with the wav audio playing over it: blank video for the first 27 seconds (based on timestamps), then the mp4 video for 3:27, then blank video covering the rest of the audio until 6:28.

Actual Result: An mp4 file with 3.554 frame rate and 10:06 duration.

import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream, createReadStream, promises as fsPromises } from 'fs';
import { exec } from 'child_process';
import { promisify } from 'util';
import { basename } from 'path';

const execAsync = promisify(exec);

const s3 = new S3Client({ region: 'us-east-1' });

async function downloadFileFromS3(bucket, key, downloadPath) {
 const getObjectParams = { Bucket: bucket, Key: key };
 const command = new GetObjectCommand(getObjectParams);
 const { Body } = await s3.send(command);
 return new Promise((resolve, reject) => {
 const fileStream = createWriteStream(downloadPath);
 Body.pipe(fileStream);
 Body.on('error', reject);
 fileStream.on('finish', resolve);
 });
}

async function uploadFileToS3(bucket, key, filePath) {
 const fileStream = createReadStream(filePath);
 const uploadParams = { Bucket: bucket, Key: key, Body: fileStream };
 try {
 await s3.send(new PutObjectCommand(uploadParams));
 console.log(`File uploaded successfully to ${bucket}/${key}`);
 } catch (err) {
 console.error("Error uploading file: ", err);
 throw new Error('Failed to upload file to S3');
 }
}

function parseDuration(durationStr) {
 const parts = durationStr.split(':');
 return parseInt(parts[0]) * 3600 + parseInt(parts[1]) * 60 + parseFloat(parts[2]);
}

export async function handler(event) {
 const videoBucket = "video-interaction-content";
 const videoKey = event.videoKey;
 const audioBucket = "audio-call-recordings";
 const audioKey = event.audioKey;
 const outputBucket = "synched-audio-video";
 const outputKey = `combined_${basename(videoKey, '.mp4')}.mp4`;

 const audioStartSeconds = new Date(event.audioStart).getTime() / 1000;
 const videoStartSeconds = new Date(event.videoStart).getTime() / 1000;
 const audioDurationSeconds = event.audioDuration / 1000;
 const timeDifference = audioStartSeconds - videoStartSeconds;

 try {
 const videoPath = `/tmp/${basename(videoKey)}`;
 const audioPath = `/tmp/${basename(audioKey)}`;
 await downloadFileFromS3(videoBucket, videoKey, videoPath);
 await downloadFileFromS3(audioBucket, audioKey, audioPath);

 //Initialize file list with video
 let filelist = [`file '${videoPath}'`];
 let totalVideoDuration = 0; // Initialize total video duration

 // Create first blank video if needed
 if (timeDifference < 0) {
 const blankVideoDuration = Math.abs(timeDifference);
 const blankVideoPath = `/tmp/blank_video.mp4`;
 await execAsync(`/opt/bin/ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -c:v libx264 -t ${blankVideoDuration} ${blankVideoPath}`);
 //Add first blank video first in file list
 filelist.unshift(`file '${blankVideoPath}'`);
 totalVideoDuration += blankVideoDuration;
 console.log(`First blank video created with duration: ${blankVideoDuration} seconds`);
 }
 
 const videoInfo = await execAsync(`/opt/bin/ffmpeg -i ${videoPath} -f null -`);
 const videoDurationMatch = videoInfo.stderr.match(/Duration: ([\d:.]+)/);
 const videoDuration = videoDurationMatch ? parseDuration(videoDurationMatch[1]) : 0;
 totalVideoDuration += videoDuration;

 // Calculate additional blank video duration
 const additionalBlankVideoDuration = audioDurationSeconds - totalVideoDuration;
 if (additionalBlankVideoDuration > 0) {
 const additionalBlankVideoPath = `/tmp/additional_blank_video.mp4`;
 await execAsync(`/opt/bin/ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -c:v libx264 -t ${additionalBlankVideoDuration} ${additionalBlankVideoPath}`);
 //Add to the end of the file list
 filelist.push(`file '${additionalBlankVideoPath}'`);
 console.log(`Additional blank video created with duration: ${additionalBlankVideoDuration} seconds`);
 }

 // Create and write the file list to disk
 const concatFilePath = '/tmp/filelist.txt';
 await fsPromises.writeFile('/tmp/filelist.txt', filelist.join('\n'));

 const extendedVideoPath = `/tmp/extended_${basename(videoKey)}`;
 //await execAsync(`/opt/bin/ffmpeg -f concat -safe 0 -i /tmp/filelist.txt -c copy ${extendedVideoPath}`);
 
 // Use -vsync vfr to adjust frame timing without full re-encoding
 await execAsync(`/opt/bin/ffmpeg -f concat -safe 0 -i ${concatFilePath} -c copy -vsync vfr ${extendedVideoPath}`);

 const outputPath = `/tmp/output_${basename(videoKey, '.mp4')}.mp4`;
 //await execAsync(`/opt/bin/ffmpeg -i ${extendedVideoPath} -i ${audioPath} -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -b:a 192k -shortest ${outputPath}`);

 await execAsync(`/opt/bin/ffmpeg -i ${extendedVideoPath} -i ${audioPath} -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -b:a 192k -shortest -r 25 ${outputPath}`);
 console.log('Video and audio have been merged successfully');

 await uploadFileToS3(outputBucket, outputKey, outputPath);
 console.log('File upload complete.');

 return { statusCode: 200, body: JSON.stringify('Video and audio have been merged successfully.') };
 } catch (error) {
 console.error('Error in Lambda function:', error);
 return { statusCode: 500, body: JSON.stringify('Failed to process video and audio.') };
 }
}

Attempts: I've tried re-encoding the concatenated file, but the Lambda function times out. I had hoped that by creating blank video at 25 fps with all the other specs of the original mp4, I wouldn't have to re-encode the concatenated file. Obviously something is wrong, though. In the commented-out code you can see I tried with and without -r 25, and also with and without -vsync. I'm new to FFmpeg, so all tips are appreciated!
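The concat demuxer with -c copy assumes every listed file has identical codec parameters and track timebase; when they differ (libx264 defaults won't match the camera clip), the output timing comes out wrong, which could explain the 3.554 fps / 10:06 result. A hedged sketch of generating the filler so it matches; the profile, pixel format, and 12800 timescale here are assumptions, so read the real values off the source via the ffmpeg -i banner first:

```shell
CMD="ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -t 27 \
-c:v libx264 -profile:v high -pix_fmt yuv420p \
-video_track_timescale 12800 -an blank.mp4"
echo "$CMD"
```

If copy-concat still misbehaves, re-encoding only the short blank parts while stream-copying the camera clip is far cheaper than re-encoding the whole concatenated output, which is what timed out.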

lavu/float_dsp: add double-precision scalar product

The function pointer is appended to the structure for backward binary compatibility. Fortunately, this is allocated by libavutil, not by the user, so increasing the structure size is safe.
  • [DH] libavutil/float_dsp.c
  • [DH] libavutil/float_dsp.h

checkasm/float_dsp: add double-precision scalar product

  • [DH] tests/checkasm/float_dsp.c

lavfi: get rid of bespoke double scalar products

  • [DH] libavfilter/aap_template.c
  • [DH] libavfilter/anlms_template.c
  • [DH] libavfilter/arls_template.c

lavu/float_dsp: R-V V scalarproduct_double

C908:
scalarproduct_double_c: 39.2
scalarproduct_double_rvv_f64: 10.5

X60:
scalarproduct_double_c: 35.0
scalarproduct_double_rvv_f64: 5.2
  • [DH] libavutil/riscv/float_dsp_init.c
  • [DH] libavutil/riscv/float_dsp_rvv.S

lavu/lls: use ff_scalarproduct_double_c()

  • [DH] libavutil/lls.c

avcodec/packet: remove reference to old AV_SIDE_DATA_PARAM_CHANGE_ values

They were forgotten in 65ddc74988245a01421a63c5cffa4d900c47117c.

Signed-off-by: James Almer
  • [DH] libavcodec/packet.h

Can I set rotation field for a video stream with FFmpeg?


I have a video file. When I open it with the MediaInfo utility, I can see that the video stream in this file has the attribute Rotation 90 (along with other attributes such as CodecID, bitrate, etc.).

Now I have another video file which does not have that Rotation 90 attribute; it has no Rotation attribute at all.

Can I use ffmpeg.exe to produce an output file with the Rotation 90 attribute added and no other changes? I don't actually want to apply any transform, just set the Rotation attribute.

I've tried the -metadata option to no avail.
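The rotate tag lives on the video stream rather than on the container, so a plain -metadata rotate=90 sets the wrong level; the stream specifier makes the difference. A hedged sketch with hypothetical filenames:

```shell
# -c copy remuxes without re-encoding; -metadata:s:v:0 targets the
# first video stream, which is where players look for the rotate tag.
CMD="ffmpeg -i in.mp4 -c copy -metadata:s:v:0 rotate=90 out.mp4"
echo "$CMD"
```

Newer FFmpeg versions stopped writing the rotate tag in favor of a display matrix; there, the -display_rotation input option is the supported route instead.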

How to extract a frame from an INSV format 360 video using Python


I have a 360 video captured with an Insta360 X3, which is in the INSV format. I would like to extract a frame from this video using Python, FFmpeg, or any other suitable tool, without using Insta360 Studio to export the video to MP4 first.

Here is what I have tried so far:

FFmpeg: I attempted to use FFmpeg to directly convert the INSV file to images, but I encountered errors, possibly due to the proprietary nature of the INSV format.

ffmpeg -i input.insv -vf "select=eq(n,0)" -q:v 3 output.jpg

This command did not work as expected and produced an error.

Python Libraries: I looked into various Python libraries such as OpenCV and MoviePy, but they do not natively support INSV format.

import cv2

cap = cv2.VideoCapture('input.insv')
success, frame = cap.read()
if success:
 cv2.imwrite('output.jpg', frame)

This code did not work, as OpenCV could not open the INSV file.

Could anyone provide guidance on how to directly extract frames from an INSV format 360 video using Python, FFmpeg, or any other tool?
