Channel: MediaSPIP
Viewing all 117689 articles

lavfi/avgblur_opencl: fix using uninitialized value

lavfi/avgblur_opencl: fix using uninitialized value Fixed using uninitialized value "global_work[0]" when calling "av_log". Fixes CID #1437471.
  • [DH] libavfilter/vf_avgblur_opencl.c

Dynamic Noise level selection with ffmpeg


I've used ffmpeg with the following filter parameters to find silences in files:

ffmpeg -i audio.wav -af silencedetect=n=-25dB

If n (noise level) is selected appropriately, this filter detects silences very well.

I'm seeking a dynamic method to obtain this value. Is there any other ffmpeg filter that can be helpful to do so?
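There is no single ffmpeg filter that outputs a silencedetect threshold directly, but one approach (a heuristic, not a built-in feature) is to run the volumedetect filter first, parse its mean_volume line, and derive the noise level from it. A sketch, assuming ffmpeg is on PATH and a fixed 10 dB margin below the mean loudness (both assumptions you would tune):

```python
import re
import subprocess

def parse_mean_volume(ffmpeg_log: str) -> float:
    """Extract mean_volume (in dB) from ffmpeg volumedetect output."""
    match = re.search(r"mean_volume:\s*(-?\d+(?:\.\d+)?) dB", ffmpeg_log)
    if match is None:
        raise ValueError("no mean_volume line found")
    return float(match.group(1))

def silence_threshold(mean_volume_db: float, margin_db: float = 10.0) -> float:
    """Heuristic: treat anything well below the mean loudness as silence."""
    return mean_volume_db - margin_db

def detect_silence(path: str) -> str:
    """Run silencedetect with a threshold derived from volumedetect."""
    probe = subprocess.run(
        ["ffmpeg", "-i", path, "-af", "volumedetect", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    threshold = silence_threshold(parse_mean_volume(probe.stderr))
    result = subprocess.run(
        ["ffmpeg", "-i", path, "-af", f"silencedetect=n={threshold}dB",
         "-f", "null", "-"],
        capture_output=True, text=True,
    )
    return result.stderr  # silencedetect reports on stderr

sample_log = "[Parsed_volumedetect_0 @ 0x7f] mean_volume: -15.4 dB"
```

With the sample log above, the derived threshold would be -25.4 dB, close to the hand-picked -25 dB in the question.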

Restrict FFmpeg/FFserver stream to logged user


I have a live feed set up with FFmpeg streaming audio/video from a webcam through an FFserver. I also have an Apache server running a website with a login page, all on the same machine.

The question is: how can I protect this live stream behind user authentication, so that my camera doesn't go public?

The goal is to provide the resource at http://myexternalip/camera-for-auth-user, for example. The login routes work fine, but anyone with the stream link (e.g. http://myexternalip:1099/camera.webm) can watch the stream.

Adding a video element with a local reference to the website, after user authentication, obviously fails, since the remote client tries to access the resource on itself. However, I think some sort of local redirect, or perhaps not using FFserver at all, would meet my needs, but I couldn't figure out how.

Download m3u8 file from hotstar link in php or nodejs [on hold]


How can we download videos from a Hotstar streaming link in PHP, JS, or Node.js? Please help if you have any knowledge.

Thanks

Bug #4159 (New): prevent the upgrade to 3.1 if the code is no longer compatible with the vers...

Convert audio files to mp3 using ffmpeg


I need to convert audio files to mp3 using ffmpeg.

When I run the command ffmpeg -i audio.ogg -acodec mp3 newfile.mp3, I get this error:

FFmpeg version 0.5.2, Copyright (c) 2000-2009 Fabrice Bellard, et al.
  configuration:
  libavutil   49.15. 0 / 49.15. 0
  libavcodec  52.20. 1 / 52.20. 1
  libavformat 52.31. 0 / 52.31. 0
  libavdevice 52. 1. 0 / 52. 1. 0
  built on Jun 24 2010 14:56:20, gcc: 4.4.1
Input #0, mp3, from 'ZHRE.mp3':
  Duration: 00:04:12.52, start: 0.000000, bitrate: 208 kb/s
  Stream #0.0: Audio: mp3, 44100 Hz, stereo, s16, 256 kb/s
Output #0, mp3, to 'audio.mp3':
  Stream #0.0: Audio: 0x0000, 44100 Hz, stereo, s16, 64 kb/s
Stream mapping: Stream #0.0 -> #0.0
Unsupported codec for output stream #0.0

I also ran this command:

 ffmpeg -formats | grep mp3

and got this in response:

FFmpeg version 0.5.2, Copyright (c) 2000-2009 Fabrice Bellard, et al.
  (same build banner as above)
 DE  mp3            MPEG audio layer 3
 D A mp3            MP3 (MPEG audio layer 3)
 D A mp3adu         ADU (Application Data Unit) MP3 (MPEG audio layer 3)
 D A mp3on4         MP3onMP4
 text2movsub remove_extra noise mov2textsub mp3decomp mp3comp mjpegadump imxdump h264_mp4toannexb dump_extra

I guess that the mp3 codec isn't installed. Am I right? Can anyone help me out here?
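The -formats listing confirms the diagnosis: mp3 appears with "DE" as a muxer and "D A" as a decoder, but there is no "E A" entry, so this build can read MP3 but has no MP3 encoder (it was built without LAME). With a build that includes libmp3lame, the encoder name to pass is libmp3lame, not mp3. A sketch of the corrected invocation (assumes ffmpeg built with --enable-libmp3lame):

```python
import subprocess

def mp3_convert_command(src: str, dst: str) -> list:
    """Build an ffmpeg command that encodes to MP3 via the LAME encoder."""
    return ["ffmpeg", "-i", src, "-acodec", "libmp3lame", dst]

cmd = mp3_convert_command("audio.ogg", "newfile.mp3")
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```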

avformat/mov: remove modulo operations from mov_estimate_video_delay()

avformat/mov: remove modulo operations from mov_estimate_video_delay() 0.324 <- 0.491 sec Reviewed-by: Derek Buitenhuis 
Reviewed-by: Sasi Inguva 
Signed-off-by: Michael Niedermayer 
  • [DH] libavformat/mov.c

avformat/mov: Eliminate variable buf_size from mov_estimate_video_delay()

avformat/mov: Eliminate variable buf_size from mov_estimate_video_delay() Reviewed-by: Derek Buitenhuis 
Reviewed-by: Sasi Inguva 
Signed-off-by: Michael Niedermayer 
  • [DH] libavformat/mov.c

How to use ffmpeg to encode multi-channel video?


Normal video uses RGB/YUV, i.e. 3 channels. Is it possible to use an existing video converter to encode video with more than 3 channels? (E.g., given 5 folders of pictures with the same count and resolution, generate a 5-channel video from them.) I don't need to play back the 5-channel video, which is impossible on a 3-channel display; I just need to encode it and then decode it back to images.

Does any existing video codec support this? Or how should I rewrite part of an existing video codec (some lightweight implementation of H.264) to support it?
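Mainstream codecs (H.264 included) only carry 3 or 4 planes, so one workaround, rather than modifying a codec, is to split each 5-channel frame into five single-channel grayscale streams, encode each one losslessly, and reassemble on decode. The encode step itself would be five separate ffmpeg runs with a gray pixel format — that part is an assumption and not shown here; this sketch only covers the lossless split/reassemble:

```python
import numpy as np

def split_planes(frame: np.ndarray) -> list:
    """Split an (H, W, C) frame into C single-channel (H, W) planes."""
    return [frame[:, :, c].copy() for c in range(frame.shape[2])]

def merge_planes(planes: list) -> np.ndarray:
    """Reassemble single-channel planes into an (H, W, C) frame."""
    return np.stack(planes, axis=2)

# tiny 2x3 frame with 5 channels as a round-trip check
frame = np.arange(2 * 3 * 5, dtype=np.uint8).reshape(2, 3, 5)
planes = split_planes(frame)
restored = merge_planes(planes)
```

As long as each plane is encoded losslessly (e.g. ffv1 or libx264 with -qp 0), the round trip is exact; with lossy encoding the channels degrade independently.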

FFMPEG Create internal pipeline for adding raw frames to AVI file (no input file)


I have an application that reads in a raw video file, does some image processing to each frame, then feeds the resulting BGRA-format byte[] frames to the FFMPEG container to eventually create an AVI file. Since this process works slightly differently than any other FFMPEG example I've seen in that it does not have an existing input file, I'm wondering if anyone knows how to do this.

I initialize the FFMPEG container:

ProcessBuilder pBuilder = new ProcessBuilder(
    raid.getLocation() + "\\ffmpeg\\bin\\ffmpeg.exe",
    "-r", "30", "-vcodec", "rawvideo", "-f", "rawvideo",
    "-pix_fmt", "bgra", "-s", size, "-i", "pipe:0",
    "-r", "30", "-y", "-c:v", "libx264",
    "C:\export\2015-02-03\1500\EXPORT6.avi");
try {
    process = pBuilder.start();
} catch (IOException e) {
    e.printStackTrace();
}
ffmpegInput = process.getOutputStream();

For each incoming byte[] array frame, I add the frame to the container ("src" is a BufferedImage that I'm converting to a byte array):

try {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ImageIO.write(src, ".png", baos);
    ffmpegInput.write(baos.toByteArray());
} catch (IOException e) {
    e.printStackTrace();
}

And once the video is finished loading frames, I close the container:

try {
    ffmpegInput.flush();
    ffmpegInput.close();
} catch (IOException e) {
    e.printStackTrace();
}

The AVI file is created, but it shows an error when opened. The FFmpeg logger reports this error:

ffmpeg version N-71102-g1f5d1ee Copyright (c) 2000-2015 the FFmpeg developers built with gcc 4.9.2 (GCC) configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --enable-decklink --enable-zlib libavutil 54. 20.101 / 54. 20.101 libavcodec 56. 30.100 / 56. 30.100 libavformat 56. 26.101 / 56. 26.101 libavdevice 56. 4.100 / 56. 4.100 libavfilter 5. 13.101 / 5. 13.101 libswscale 3. 1.101 / 3. 1.101 libswresample 1. 1.100 / 1. 1.100 libpostproc 53. 3.100 / 53. 3.100
Input #0, rawvideo, from 'pipe:0': Duration: N/A, bitrate: 294912 kb/s Stream #0:0: Video: rawvideo (BGRA / 0x41524742), bgra, 640x480, 294912 kb/s, 30 tbr, 30 tbn, 30 tbc
No pixel format specified, yuv444p for H.264 encoding chosen.
Use -pix_fmt yuv420p for compatibility with outdated media players.
[libx264 @ 00000000003bcbe0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
[libx264 @ 00000000003bcbe0] profile High 4:4:4 Predictive, level 3.0, 4:4:4 8-bit
Output #0, avi, to 'C:\export\2015-02-03\1500\EXPORT6.avi': Metadata: ISFT : Lavf56.26.101 Stream #0:0: Video: h264 (libx264) (H264 / 0x34363248), yuv444p, 640x480, q=-1--1, 30 fps, 30 tbn, 30 tbc Metadata: encoder : Lavc56.30.100 libx264
Stream mapping: Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
frame= 0 fps=0.0 q=0.0 Lsize= 6kB time=00:00:00.00 bitrate=N/A video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)

Any insight or ideas would be greatly appreciated!
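One likely culprit, offered as an observation rather than a confirmed diagnosis: the ProcessBuilder tells ffmpeg to expect raw BGRA pixels on stdin (-f rawvideo -pix_fmt bgra), but the loop writes PNG-encoded bytes via ImageIO, so ffmpeg never recognizes a single frame and encodes nothing ("Output file is empty"). Either write the raw pixel bytes, or change the input arguments to a PNG pipe (-f image2pipe). A minimal sketch of the raw-bytes layout, in Python for brevity (names are hypothetical):

```python
import subprocess

def bgra_bytes(pixels):
    """Flatten rows of (b, g, r, a) tuples into the raw byte stream
    that `-f rawvideo -pix_fmt bgra` expects on stdin."""
    return bytes(v for row in pixels for px in row for v in px)

def start_encoder(width: int, height: int, out_path: str):
    """Start ffmpeg reading raw BGRA frames from stdin (assumes ffmpeg on PATH)."""
    cmd = ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "bgra",
           "-s", f"{width}x{height}", "-r", "30", "-i", "pipe:0",
           "-c:v", "libx264", "-pix_fmt", "yuv420p", "-y", out_path]
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)

# one 2x2 test frame: four BGRA pixels, 4 bytes each
frame = [[(255, 0, 0, 255), (0, 255, 0, 255)],
         [(0, 0, 255, 255), (255, 255, 255, 255)]]
raw = bgra_bytes(frame)
```

In the Java version this means writing src's pixel buffer (in BGRA order) to ffmpegInput directly, with exactly width * height * 4 bytes per frame, instead of the PNG-encoded ByteArrayOutputStream.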

how to stream I420 raw video using ffmpeg and rtp?


I'm working on a project and I need to stream I420 raw video over RTP, because the receiver uses GStreamer's rtpvrawdepay element. But I have no idea how to do it. Many thanks!
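A hedged sketch of a command shape for this: read the raw I420 file with explicit geometry and send it to an RTP URL. Whether a given ffmpeg build can packetize raw video over RTP in the RFC 4175 form that rtpvrawdepay expects varies by version; if it can't, GStreamer's own rtpvrawpay on the sending side is the safer counterpart. Resolution, frame rate, and address below are placeholders:

```python
import subprocess

def rtp_raw_command(src: str, size: str, fps: int, dest: str) -> list:
    """Build an ffmpeg command that reads raw I420 (yuv420p) video and
    muxes it to an RTP destination without re-encoding."""
    return ["ffmpeg",
            "-f", "rawvideo", "-pix_fmt", "yuv420p",
            "-s", size, "-r", str(fps), "-i", src,
            "-c:v", "copy", "-f", "rtp", dest]

cmd = rtp_raw_command("input.yuv", "640x480", 30, "rtp://192.168.1.10:5004")
# subprocess.run(cmd, check=True)  # requires ffmpeg with RTP raw-video support
```

ffmpeg also prints an SDP description for RTP outputs (or writes one with -sdp_file); the GStreamer receiver needs the matching caps from it.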

Stream frame from video to pipeline and publish it to HTTP mjpeg via ffmpeg


Let's say I have very simple program which has been written in C++ with usage of OpenCV 3.4 under Windows 10.

VideoCapture cap("test.avi");
Mat frame;
while (true) {
    if (!cap.read(frame)) {
        break;
    }
    // SEND FRAME TO PIPE
}

It's just a simple example of reading an AVI video frame by frame, but in the end it's going to be a server-side application that produces a modified stream from a few IP cameras. I want to use an HTML5 video tag to display the output directly on a website, but it's quite hard to find useful information on that topic (for Windows). If I understand correctly, I need to define a pipeline and send an MJPEG stream through it, with the help of FFmpeg, where FFmpeg creates a local HTTP server on a specific port. Has anyone tackled a similar task under Windows? I guess that 80% of the task is proper usage of the ffmpeg command-line tool; one of my priorities is minimal modification of the application.

So to make long story short, I have application which I can call directly from command line :

stream_producer.exe CAMERA_1 

and I want to be able to see MJPEG stream under :

http://localhost:1234

which can be displayed on local website in intranet.
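One possible shape for this, offered as an assumption rather than a verified recipe: pipe frames from stream_producer.exe into ffmpeg's stdin and let ffmpeg itself act as a single-client HTTP server via the http protocol's -listen option (available in reasonably recent builds), emitting multipart JPEG with the mpjpeg muxer, which a browser img tag can display. Port, quality, and input are placeholders:

```python
import subprocess

def mjpeg_server_command(src: str, port: int) -> list:
    """Build an ffmpeg command that serves a multipart MJPEG stream
    over HTTP using the single-client `-listen 1` mode."""
    return ["ffmpeg", "-i", src,
            "-f", "mpjpeg", "-q:v", "5",
            "-listen", "1", f"http://127.0.0.1:{port}"]

# e.g. stream_producer.exe CAMERA_1 | ffmpeg ... with src = "pipe:0"
cmd = mjpeg_server_command("pipe:0", 1234)
```

Note -listen 1 serves one client at a time; for multiple intranet viewers a small relay (nginx, or a dedicated MJPEG streamer) in front of ffmpeg is the usual workaround.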

Regards.

FFmpeg matching encoding output between 2 videos


I am trying to replace a boot video on my device, but I'm unable to get the two videos to match despite my best efforts, which causes the video not to show at all. I'm fairly sure the issue is an encoding difference between the videos.

The original file output from ffprobe is as follows:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '[FOLDER]/1_powerup_2017_main.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42isomavc1
    creation_time   : 2016-12-07T20:39:51.000000Z
    encoder         : HandBrake 0.9.9 2013051800
  Duration: 00:00:11.01, start: 0.000000, bitrate: 4789 kb/s
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1080x1920 [SAR 1:1 DAR 9:16], 4648 kb/s, 24 fps, 24 tbr, 90k tbn, 180k tbc (default)
    Metadata:
      creation_time   : 2016-12-07T20:39:51.000000Z
      encoder         : JVT/AVC Coding
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 140 kb/s (default)
    Metadata:
      creation_time   : 2016-12-07T20:39:51.000000Z

Which I used the following command to attempt to create:

ffmpeg -i [INPUT] -vf setsar=1,format=yuv420p -r 24 -c:v libx264 -profile:v main -brand mp42 -color_primaries bt709 -color_trc bt709 -colorspace bt709 [OUTPUT] 

This command creates a video with the following ffprobe output:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '[FOLDER]/1_powerup_2017_main.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf57.83.100
  Duration: 00:00:06.34, start: 0.000000, bitrate: 988 kb/s
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, bt709), 1080x1920 [SAR 1:1 DAR 9:16], 972 kb/s, 24 fps, 24 tbr, 12288 tbn, 48 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 9 kb/s (default)
    Metadata:
      handler_name    : SoundHandler

Is this to do with the compatible_brands or is there a Handbrake preset that could be used that I didn't notice?
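One visible mismatch beyond the brands is the track timescale: the original shows 90k tbn while the re-encode got the mp4 default of 12288. Whether the boot loader actually checks tbn or the brand string is device-specific (an assumption), but the mov muxer's -video_track_timescale option narrows that particular ffprobe difference. A sketch extending the original command:

```python
import subprocess

def boot_video_command(src: str, dst: str) -> list:
    """Rebuild the encode command, additionally matching the original
    file's 90k video track timescale (tbn)."""
    return ["ffmpeg", "-i", src,
            "-vf", "setsar=1,format=yuv420p", "-r", "24",
            "-c:v", "libx264", "-profile:v", "main",
            "-color_primaries", "bt709", "-color_trc", "bt709",
            "-colorspace", "bt709",
            "-video_track_timescale", "90000",
            "-brand", "mp42", dst]

cmd = boot_video_command("input.mp4", "output.mp4")
# subprocess.run(cmd, check=True)  # requires ffmpeg with libx264
```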

FFMPEG: avcodec_send_packet(); error while using multithread


I wrote 2 threads to decode an RTSP stream from an IP camera, as below:

The RTSP_read_packet function reads packets from the RTSP link; the packets are stored in a queue named Packet_buf.

std::queue<AVPacket> Packet_buf;
bool pkt_pending_k = false;

int RTSP_read_packet(string url)
{
    rtsp_init(url);
    int ret;
    AVPacket packet;
    av_init_packet(&packet);
    while (1) {
        ret = av_read_frame(pFormatCtx, &packet);
        if (ret == 0) {
            if (packet.stream_index == video_stream_index) {
                Packet_buf.push(packet);
                if (ready1 == false) {
                    ready1 = true;
                    conv1.notify_one();
                }
            }
            av_packet_unref(&packet);
            cout << "number of RTSP packet: " <

The ffmpeg_decode function reads packets from Packet_buf and decodes frames:

AVFrame ffmpeg_decode( void )
{ AVPacket avpkt; av_init_packet(&avpkt); int ret; conv1.wait(lk1,[]{return ready1;}); while(1) { while(1) { ret = avcodec_receive_frame(pCodecCtx,pFrame); if(ret == AVERROR(EAGAIN)||ret==AVERROR_EOF){ break; } return pFrame; } if(!Packet_buf.empty()) { if(pkt_pending_k == false) { avpkt = Packet_buf.front(); Packet_buf.pop(); }else{ pkt_pending_k = false; } } ret = avcodec_send_packet(pCodecCtx, &avpkt); //program halting here cout<<"-------------> ret = "<

My program halt at line:

ret = avcodec_send_packet(pCodecCtx, &avpkt);

Can anyone help me find the problem? Thanks!

Cannot install ffmpeg via brew on macOS Mojave

  • Xcode 10 beta installed
  • Command line tool for macOS 10.14 installed
  • macOS_SDK_headers_for_macOS_10.14.pkg installed
  • brew update done;

When trying to install ffmpeg with brew install ffmpeg, I got the error below:

tar: Error exit delayed from previous errors.
Error: Failure while executing: tar xf /Users/myname/Library/Caches/Homebrew/texi2html-5.0.tar.gz -C /private/tmp/texi2html-20180712-39689-hmsh90

This seems to be a problem with Homebrew's texi2html-5.0.tar.gz.

Then using brew info ffmpeg :

ffmpeg: stable 4.0.1, HEAD
Play, record, convert, and stream audio and video
https://ffmpeg.org/
Not installed
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/ffmpeg.rb
==> Dependencies
Build: nasm ✔, pkg-config ✔, texi2html ✘
Recommended: lame ✘, x264 ✘, xvid ✘
Optional: chromaprint ✘, fdk-aac ✘, fontconfig ✘, freetype ✘, frei0r ✘, game-music-emu ✘, libass ✘, libbluray ✘, libbs2b ✘, libcaca ✘, libgsm ✘, libmodplug ✘, librsvg ✘, libsoxr ✘, libssh ✘, libvidstab ✘, libvorbis ✘, libvpx ✘, opencore-amr ✘, openh264 ✘, openjpeg ✘, openssl ✘, opus ✘, rtmpdump ✘, rubberband ✘, sdl2 ✘, snappy ✘, speex ✘, tesseract ✘, theora ✘, two-lame ✘, wavpack ✘, webp ✘, x265 ✘, xz ✔, zeromq ✘, zimg ✘, srt ✘

As this wasn't working, I just downloaded the static FFmpeg binaries for macOS 64-bit from https://evermeet.cx/ffmpeg/ and moved the executable to /usr/local/bin/, and now ffmpeg works perfectly.

But I'm still curious to know how to resolve this brew error.

Thanks in advance if anyone has the solution.


Saving frames from multiple videos in a specific location using openCV/ffmpeg and Python


I am trying to extract and save the first frame from multiple videos in a specific folder. For now I've got the extraction part working, but my saved frames are in BGR instead of the preferred RGB (if I am right); the frames are shown in my notebook as RGB, though, not as BGR. Also I need to add a variable filename, because at the moment it saves the frames but keeps overwriting the same file. Can you help me with these two specific problems? This is what I've got so far:

SOLVED: I got the saving working, output file and colouring

img_rows, img_cols = 200, 200
listing = os.listdir(r'C:\Users\bomroeland\Desktop\SVWnew\archery\train')

# Create a counter
counter = 0
for vid in listing:
    vid = r"C:/Users/bomroeland/Desktop/SVWnew/archery/train/" + vid
    cap = cv2.VideoCapture(vid)
    for k in range(1):
        ret, frame = cap.read()
        rgb = cv2.resize(frame, (img_rows, img_cols))
        plt.imshow(rgb)
        plt.xticks([]), plt.yticks([])
        plt.show()
        pathOut = r"C:/Users/bomroeland/Desktop/SVWnew - Copy/archery/train"
        cv2.imwrite(pathOut + "/frame%d.jpg" % counter, rgb)
        counter += 1
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
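For reference, the two fixes reduce to this: OpenCV reads frames as BGR, and cv2.imwrite also expects BGR, so files on disk are fine; it is plt.imshow that needs a BGR-to-RGB conversion. For unique filenames, deriving the name from each video's name avoids overwrites. A minimal sketch of both pieces (pure numpy so it runs without OpenCV; directory names are placeholders):

```python
import os
import numpy as np

def bgr_to_rgb(frame: np.ndarray) -> np.ndarray:
    """Reverse the channel axis: what cv2 gives as BGR becomes RGB for display."""
    return frame[:, :, ::-1]

def frame_filename(out_dir: str, video_name: str) -> str:
    """Derive a per-video output name so frames aren't overwritten."""
    stem = os.path.splitext(video_name)[0]
    return os.path.join(out_dir, f"{stem}_frame0.jpg")

bgr = np.zeros((1, 1, 3), dtype=np.uint8)
bgr[0, 0] = (255, 0, 0)            # pure blue in BGR order
rgb = bgr_to_rgb(bgr)              # becomes (0, 0, 255): blue in RGB order
name = frame_filename("train_out", "archery01.avi")
```

In the loop above that means plt.imshow(bgr_to_rgb(rgb)) for display and cv2.imwrite(frame_filename(pathOut, vid), rgb) for saving.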

How to get output from ffmpeg process in c#


In code I wrote in WPF, I run some filters in FFmpeg. If I run the command in a terminal (PowerShell or cmd prompt), it gives me information line by line about what's going on.

I am calling the process from C# code and it works fine. The problem with my code is that I am not able to get any output from the process I run.

I have tried some answers from Stack Overflow for the FFmpeg process. I see 2 options in my code: I can either fix it with a Timer approach, or hook an event to OutputDataReceived.

I tried the OutputDataReceived event, but I never got it to work. I tried the Timer approach, but it's still not hitting my code. Please check the code below:

_process = new Process
{
    StartInfo = new ProcessStartInfo
    {
        FileName = ffmpeg,
        Arguments = arguments,
        UseShellExecute = false,
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        CreateNoWindow = true,
    },
    EnableRaisingEvents = true
};
_process.OutputDataReceived += Proc_OutputDataReceived;
_process.Exited += (a, b) =>
{
    System.Threading.Tasks.Task.Run(() =>
    {
        System.Threading.Tasks.Task.Delay(5000);
        System.IO.File.Delete(newName);
    });
    //System.IO.File.Delete()
};
_process.Start();

_timer = new Timer();
_timer.Interval = 500;
_timer.Start();
_timer.Tick += Timer_Tick;
}

private void Timer_Tick(object sender, EventArgs e)
{
    while (_process.StandardOutput.EndOfStream)
    {
        string line = _process.StandardOutput.ReadLine();
    }
    // Check the process.
}
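The key detail: ffmpeg writes its progress log to standard error, not standard output, so OutputDataReceived never fires (stdout stays empty). In C# terms, hook ErrorDataReceived and call BeginErrorReadLine() (and BeginOutputReadLine() for stdout) after Start(). A Python sketch of the same principle, using a stand-in child process that logs to stderr the way ffmpeg does (the stand-in is hypothetical):

```python
import subprocess
import sys

def read_child_stderr(cmd: list) -> list:
    """Read a child's stderr line by line; this is where ffmpeg writes
    its progress, which is why stdout handlers see nothing."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, text=True)
    lines = [line.rstrip("\n") for line in proc.stderr]
    proc.wait()
    return lines

# stand-in for ffmpeg: a child that prints progress lines to stderr
child = [sys.executable, "-c",
         "import sys; print('frame=1', file=sys.stderr); "
         "print('frame=2', file=sys.stderr)"]
logs = read_child_stderr(child)
```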

FFMPEG split video is not working properly


I am trying to split a video, then split it into frames. I am passing the starting time and ending time dynamically.

for ex:

ffmpeg -i /Users/mypc/Documents/Avatar/input.mp4 -ss 00:00:39.799 -t 00:00:42.039 /Users/mypc/Downloads/testing/output.mp4

It should cut the video from the 39th second to the 42nd second, approximately 3 seconds. But it's splitting out more than 3 seconds, and I am stuck on why it behaves like that.

Am I missing something in my command, or anything else?

Please suggest

A screenshot of my terminal is attached: (screenshot not included)
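The likely cause: -t takes a duration, not an end timestamp, so -ss 00:00:39.799 -t 00:00:42.039 asks for ~42 seconds of output starting at 39.8s. Either switch to -to 00:00:42.039 (an end position) or compute the duration yourself, as this sketch does:

```python
def timestamp_to_seconds(ts: str) -> float:
    """Parse HH:MM:SS.mmm into seconds."""
    h, m, s = ts.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def clip_duration(start: str, end: str) -> float:
    """-t takes a duration; to cut from `start` to `end`, pass end - start."""
    return timestamp_to_seconds(end) - timestamp_to_seconds(start)

dur = clip_duration("00:00:39.799", "00:00:42.039")   # ~2.24 seconds
cmd = ["ffmpeg", "-i", "input.mp4", "-ss", "00:00:39.799",
       "-t", f"{dur:.3f}", "output.mp4"]
```

A separate, smaller effect: without re-encoding, cuts snap to the nearest keyframe, so the clip may still run slightly long even with the correct duration.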

Google speech to text api is partially converting .flac file into text


Steps followed:

  1. Converted .mp3 to .flac using ffmpeg.
  2. Ran this command gs://xxx/xxx.flac --language-code=en-US --async --encoding=FLAC --sample-rate=44100.
  3. After processing, it shows a result in JSON format, but it's not relevant to the audio file.

JSON result looks like:

{
  "@type": "xxx",
  "results": [
    {
      "alternatives": [
        { "confidence": 0.71890223, "transcript": "I reports everybody." }
      ]
    },
    {
      "alternatives": [
        { "confidence": 0.5876879, "transcript": "dear, it's your" }
      ]
    }
    ......
  ]
}

Can someone please help me figure out why it is not converting the audio file correctly? Am I missing any flags?
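Two common causes worth ruling out (these are assumptions, not a confirmed diagnosis): the --sample-rate you declare must match the FLAC's actual rate, and the Speech API recognizes mono audio more reliably than stereo. Re-encoding the MP3 to mono FLAC at a known rate removes both variables. A sketch of the ffmpeg step (paths are placeholders):

```python
import subprocess

def flac_for_speech_command(src: str, dst: str) -> list:
    """Re-encode to mono FLAC at 16 kHz so the declared sample rate
    is known and the channel count is 1."""
    return ["ffmpeg", "-i", src, "-ac", "1", "-ar", "16000", dst]

cmd = flac_for_speech_command("audio.mp3", "audio.flac")
# subprocess.run(cmd, check=True)  # then pass --sample-rate=16000 to the API
```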

Adding silence between words in audio file using ffmpeg


What I am trying to do is concat wav files which contain short audio clips. I am able to concat them into one file, but I want to place each clip at a specific time.

Currently I can concat the files, but I can't place each one at the specific time it needs to be. I thought maybe I could just add the right amount of silence between them and solve the problem that way. I am new to ffmpeg.

I have a text file with the file names i.e. text.txt

file a.wav
file b.wav
file c.wav

and I use this cmd:

ffmpeg -f concat -i text.txt out.mp3

This works, but is there a way to add a specified number of minutes of silence between them?

I tried to put this in the text file, but it didn't work:

file a.wav
inpoint 5
outpoint 10
file b.wav
inpoint 10
outpoint 20
file c.wav
inpoint 20
outpoint 25
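That's expected: inpoint/outpoint trim the listed files, they don't insert gaps. One workaround (a sketch under assumptions, filenames hypothetical): pre-generate silent WAVs with the anullsrc source filter, then interleave them in the concat list. The silence files should match the sample rate and channel layout of the real clips:

```python
def silence_command(seconds: float, path: str) -> list:
    """Build an ffmpeg command that generates a silent WAV with anullsrc."""
    return ["ffmpeg", "-f", "lavfi",
            "-i", "anullsrc=r=44100:cl=stereo",
            "-t", str(seconds), path]

def concat_listing(entries: list) -> str:
    """Interleave input files and silence files in concat-demuxer syntax."""
    return "\n".join(f"file {name}" for name in entries) + "\n"

# e.g. 5 seconds of silence between each clip
cmd = silence_command(5, "sil5.wav")
text = concat_listing(["a.wav", "sil5.wav", "b.wav", "sil5.wav", "c.wav"])
```

Write `text` to text.txt and run the same `ffmpeg -f concat -i text.txt out.mp3` as before. An alternative that avoids extra files is the adelay filter per input plus amix, at the cost of a longer filtergraph.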

