
avformat/utils: add missing FF_API_LAVF_AVCTX check

avformat/utils: add missing FF_API_LAVF_AVCTX check Signed-off-by: James Almer 
  • [DH] libavformat/utils.c

avutil/pixdesc: add missing FF_API_PSEUDOPAL check

avutil/pixdesc: add missing FF_API_PSEUDOPAL check Signed-off-by: James Almer 
  • [DH] libavutil/pixdesc.h

ffmpeg errors in the daemon


I created a shell script to compress a video using ffmpeg (4.3.1).

ffmpeg -y -i \
 '/var/www/System/Backend/Outputs/TempSaveMovie/200703_4_short_5fr_p2(100_20)_r(50_20).mp4' \
 -vcodec h264 -an \
 '/var/www/System/Backend/Outputs/MovieOutputs/200703_4_short_5fr_p2(100_20)_r(50_20).mp4'

If you run this code from the console, it runs without problems. In fact, we're executing it from Python with subprocess.call(), and that works fine too.

cmd = 'sh /var/www/System/Backend/cv2toffmpeg.sh'
subprocess.call(cmd, shell=True)

However, if I run it from a daemonized Python program, I get the following error:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './Outputs/TempSaveMovie/200703_4_short_5fr_p2(100_20)_r(50_20).mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2mp41
 encoder : Lavf58.35.100
 Duration: 00:00:06.15, start: 0.000000, bitrate: 10246 kb/s
 Stream #0:0(und): Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 10244 kb/s, 13 fps, 13 tbr, 13312 tbn, 13 tbc (default)
 Metadata:
 handler_name : VideoHandler
Stream mapping:
 Stream #0:0 -> #0:0 (mpeg4 (native) -> h264 (h264_nvenc))
Press [q] to stop, [?] for help
[mpeg4 @ 0x55cec17c5480] header damaged
[mpeg4 @ 0x55cec17c6840] header damaged
[mpeg4 @ 0x55cec1855f80] header damaged
[mpeg4 @ 0x55cec1866e00] header damaged
Output #0, mp4, to './Outputs/MovieOutputs/200703_4_short_5fr_p2(100_20)_r(50_20).mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2mp41
 encoder : Lavf58.45.100
 Stream #0:0(und): Video: h264 (h264_nvenc) (Main) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 2000 kb/s, 13 fps, 13312 tbn, 13 tbc (default)
 Metadata:
 handler_name : VideoHandler
 encoder : Lavc58.91.100 h264_nvenc
 Side data:
 cpb: bitrate max/min/avg: 0/0/2000000 buffer size: 4000000 vbv_delay: N/A
Error while decoding stream #0:0: Invalid data found when processing input
[mpeg4 @ 0x55cec17c8780] header damaged
Error while decoding stream #0:0: Invalid data found when processing input
[mpeg4 @ 0x55cec17c5480] header damaged

I think the problem occurs when it is run from a daemonized process. There seems to have been a similar problem in the past ("FFmpeg does not properly convert videos when run as daemon"). I would like to ask for your help to solve this problem. Thank you for your help from Japan.
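One detail worth checking, as a hedged guess rather than a diagnosis: the daemon's log shows relative paths (./Outputs/...) while the shell script uses absolute ones, so the daemonized process may be running with a different working directory or a stripped-down environment. Below is a minimal sketch of calling ffmpeg from Python with an explicit working directory and captured stderr to narrow this down; the placeholder file names and the use of subprocess.run are assumptions, not the original setup.

import subprocess

# Hypothetical paths; substitute the real input/output locations.
SRC = "/var/www/System/Backend/Outputs/TempSaveMovie/input.mp4"
DST = "/var/www/System/Backend/Outputs/MovieOutputs/output.mp4"

# Run ffmpeg directly (no intermediate shell), pin the working directory,
# and capture stderr so the daemon's log records what ffmpeg actually saw.
result = subprocess.run(
    ["ffmpeg", "-y", "-i", SRC, "-vcodec", "h264", "-an", DST],
    cwd="/var/www/System/Backend",
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    print(result.stderr)  # or write it to a log file the daemon keeps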

ffmpeg - conflicting libraries or unable to compile C++ project [duplicate]


This is less about C++ and more about compiling against avcodec and avformat, which are part of FFmpeg. I'm trying to build against a freshly downloaded FFmpeg source for the first time. All of the other linking issues have been resolved, and the build now fails at this specific step.

I'm writing my first program that attempts to use the avformat libraries from my C++ code. The linker errors out here:

[ 89%] Linking CXX executable ffmpeg_01
clang: error: no such file or directory: 'VideoToolbox'
clang: error: no such file or directory: 'CoreFoundation'
clang: error: no such file or directory: 'CoreMedia'
clang: error: no such file or directory: 'CoreVideo'
clang: error: no such file or directory: 'CoreServices'
clang: error: no such file or directory: 'OpenGL'
clang: error: no such file or directory: 'CoreImage'
clang: error: no such file or directory: 'AppKit'
clang: error: no such file or directory: 'Foundation'
clang: error: no such file or directory: 'CoreAudio'
clang: error: no such file or directory: 'AVFoundation'
clang: error: no such file or directory: 'CoreGraphics'
make[3]: *** [ffmpeg_01] Error 1

This is the content of my libavcodec.pc file; it has a Conflicts: entry, and the items that clang cannot find all appear (as -framework flags) on the Libs: line. I don't know whether those are linked somehow. I found a source suggesting that a similar error appears when other forks of ffmpeg are installed, but I don't have any.

prefix=/usr/local
exec_prefix=${prefix}
libdir=/usr/local/lib
includedir=/usr/local/include

Name: libavcodec
Description: FFmpeg codec library
Version: 58.108.100
Requires: libswresample >= 3.8.100, libavutil >= 56.60.100
Requires.private: 
Conflicts:
Libs: -L${libdir} -lavcodec -liconv -lm -llzma -lz -framework AudioToolbox -L/usr/local/Cellar/x264/r3011/lib -lx264 -pthread -framework VideoToolbox -framework CoreFoundation -framework CoreMedia -framework CoreVideo -framework CoreServices
Libs.private: 
Cflags: -I${includedir}

I don't have experience with CMake files; I can paste mine here if needed. Any help is greatly appreciated, because I've been stuck for the past several days.

Anomalie #4564 (New): spip 3.2.8: create_function deprecated


FFmpeg - Libavcodec - unable to compile, errors on files not found, but files are there


This is less about C++ and more about compiling against avcodec and avformat, which are part of FFmpeg. I'm trying to build the FFmpeg project code for the first time. All of the other linking issues have been resolved, and the build now errors out.

I'm writing my first program that attempts to use the avformat libraries from my C++ code. The linker errors out here:

[ 89%] Linking CXX executable ffmpeg_01
clang: error: no such file or directory: 'VideoToolbox'
clang: error: no such file or directory: 'CoreFoundation'
clang: error: no such file or directory: 'CoreMedia'
clang: error: no such file or directory: 'CoreVideo'
clang: error: no such file or directory: 'CoreServices'
clang: error: no such file or directory: 'OpenGL'
clang: error: no such file or directory: 'CoreImage'
clang: error: no such file or directory: 'AppKit'
clang: error: no such file or directory: 'Foundation'
clang: error: no such file or directory: 'CoreAudio'
clang: error: no such file or directory: 'AVFoundation'
clang: error: no such file or directory: 'CoreGraphics'
make[3]: *** [ffmpeg_01] Error 1

This is the content of my libavcodec.pc file; it has a Conflicts: entry, and the items that clang cannot find all appear (as -framework flags) on the Libs: line. I don't know whether those are linked somehow. I found a source suggesting that a similar error appears when other forks of ffmpeg are installed, but I don't have any.

prefix=/usr/local
exec_prefix=${prefix}
libdir=/usr/local/lib
includedir=/usr/local/include

Name: libavcodec
Description: FFmpeg codec library
Version: 58.108.100
Requires: libswresample >= 3.8.100, libavutil >= 56.60.100
Requires.private: 
Conflicts:
Libs: -L${libdir} -lavcodec -liconv -lm -llzma -lz -framework AudioToolbox -L/usr/local/Cellar/x264/r3011/lib -lx264 -pthread -framework VideoToolbox -framework CoreFoundation -framework CoreMedia -framework CoreVideo -framework CoreServices
Libs.private: 
Cflags: -I${includedir}

I don't have experience with CMake files. Any help is greatly appreciated, because I've been stuck for the past several days.

I have two CMakeLists:

cmake_minimum_required(VERSION 3.17)
project(ffmpeg_01)

set(CMAKE_CXX_STANDARD 14)

add_subdirectory(lib/glfw)
add_subdirectory(lib/FFmpeg)

add_definitions(-DGL_SILENCE_DEPRECATION) # the macro is spelled DEPRECATION, not DEPRECIATION

if(APPLE)
 list(APPEND EXTRA_LIBS
 "-framework OpenGL"
 )
elseif(LINUX)
 list(APPEND EXTRA_LIBS
 "-lGL -lGLU -lX11")
endif()

add_executable(ffmpeg_01 src/main.cpp src/load_frame.cpp)
target_link_libraries(ffmpeg_01 FFmpeg glfw ${EXTRA_LIBS})

and the interface for FFmpeg under the FFmpeg subdirectory:

cmake_minimum_required(VERSION 3.17)
project(FFMPEG)

find_package(PkgConfig REQUIRED)

pkg_check_modules(AVCODEC REQUIRED IMPORTED_TARGET libavcodec)
pkg_check_modules(AVFORMAT REQUIRED IMPORTED_TARGET libavformat)
pkg_check_modules(AVFILTER REQUIRED IMPORTED_TARGET libavfilter)
pkg_check_modules(AVDEVICE REQUIRED IMPORTED_TARGET libavdevice)
pkg_check_modules(AVUTIL REQUIRED IMPORTED_TARGET libavutil)
pkg_check_modules(SWRESAMPLE REQUIRED IMPORTED_TARGET libswresample)
pkg_check_modules(SWSCALE REQUIRED IMPORTED_TARGET libswscale)

add_library(FFmpeg INTERFACE IMPORTED GLOBAL)

target_link_libraries(FFmpeg INTERFACE PkgConfig::AVCODEC)
target_link_libraries(FFmpeg INTERFACE PkgConfig::AVFORMAT)
target_link_libraries(FFmpeg INTERFACE PkgConfig::AVFILTER)
target_link_libraries(FFmpeg INTERFACE PkgConfig::AVDEVICE)
target_link_libraries(FFmpeg INTERFACE PkgConfig::AVUTIL)
target_link_libraries(FFmpeg INTERFACE PkgConfig::SWRESAMPLE)
target_link_libraries(FFmpeg INTERFACE PkgConfig::SWSCALE)


target_include_directories(FFmpeg INTERFACE
 ${AVCODEC_INCLUDE_DIRS}
 ${AVFORMAT_INCLUDE_DIRS}
 ${AVFILTER_INCLUDE_DIRS}
 ${AVDEVICE_INCLUDE_DIRS}
 ${AVUTIL_INCLUDE_DIRS}
 ${SWRESAMPLE_INCLUDE_DIRS}
 ${SWSCALE_INCLUDE_DIRS}
 )
target_link_options(FFmpeg INTERFACE
 ${AVCODEC_LDFLAGS}
 ${AVFORMAT_LDFLAGS}
 ${AVFILTER_LDFLAGS}
 ${AVDEVICE_LDFLAGS}
 ${AVUTIL_LDFLAGS}
 ${SWRESAMPLE_LDFLAGS}
 ${SWSCALE_LDFLAGS}
 )
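A quick way to see where those bare framework names come from is to inspect the link flags pkg-config reports for libavcodec: each name is an argument to a preceding -framework flag, and if a build step splits or de-duplicates those pairs, clang receives the bare name and reports it as "no such file or directory". The following diagnostic is only a sketch (Python is used here purely for illustration; the pkg-config invocation itself is the standard one):

import shutil
import subprocess

if shutil.which("pkg-config") is None:
    raise SystemExit("pkg-config not found on PATH")

# The same flags CMake's pkg_check_modules sees for libavcodec.
flags = subprocess.run(
    ["pkg-config", "--libs", "libavcodec"],
    capture_output=True, text=True, check=True,
).stdout.split()

i = 0
while i < len(flags):
    if flags[i] == "-framework" and i + 1 < len(flags):
        print("framework:", flags[i + 1])   # the pair must stay together
        i += 2
    elif flags[i].startswith("-"):
        print("regular flag:", flags[i])
        i += 1
    else:
        # A bare name with no preceding -framework is exactly what clang
        # reports as "no such file or directory".
        print("suspicious bare argument:", flags[i])
        i += 1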

How can I stream a canvas generated in Node.js through ffmpeg to YouTube or any other RTMP server?


I wanted to generate some images in Node.js, compile them into a video, and stream it to YouTube. To generate the images I'm using the node-canvas module. This sounds simple enough, but I wanted to generate the images continuously and stream the result in real time. I'm very new to this whole thing, and after reading a bunch of resources on the internet, what I was thinking of doing was:

  1. Open ffmpeg with spawn('ffmpeg', ...args), setting the output to the destination rtmp server
  2. Generate the image in the canvas
  3. Convert the content of the canvas to a buffer, and write it to the ffmpeg process through stdin
  4. Enjoy the result on Youtube

But it's not as simple as that, is it? I saw people sharing code involving client-side JS running in the browser, but I wanted this to be a Node app so that I could run it from a remote VPS. Is there a way to do this without using something like p5 in my browser and capturing the window to restream it? Is my thought process even remotely adequate? For now I don't really care about performance or resource usage. Thanks in advance.
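For orientation, the pipeline described above can be sketched in a few lines of Python (used here only because most examples in this archive are Python): raw frames are written to ffmpeg's stdin and ffmpeg encodes and pushes them to the RTMP endpoint. The stream key, resolution and the flat-colour test frames are placeholders, not working values.

import subprocess

WIDTH, HEIGHT, FPS = 1280, 720, 30

# ffmpeg reads raw RGB frames from stdin, encodes them and pushes to RTMP.
ffmpeg = subprocess.Popen(
    [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
        "-i", "pipe:0",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-f", "flv", "rtmp://a.rtmp.youtube.com/live2/STREAM-KEY",
    ],
    stdin=subprocess.PIPE,
)

for n in range(FPS * 10):  # ten seconds of frames
    shade = n % 256
    frame = bytes([shade, 0, 255 - shade]) * (WIDTH * HEIGHT)  # one flat colour per frame
    ffmpeg.stdin.write(frame)

ffmpeg.stdin.close()
ffmpeg.wait()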

EDIT:

I worked on it for a bit, and I couldn't get it to work... This is my code:

const { spawn } = require('child_process');
const { createCanvas } = require('canvas');
const fs = require('fs');


const canvas = createCanvas(1920, 1080);
const ctx = canvas.getContext('2d');
const ffmpeg = spawn("ffmpeg",
 ["-re", "-f", "png_pipe", "-vcodec", "png", "-i", "pipe:0", "-vcodec", "h264", "-re", "-f", "flv", "rtmp://a.rtmp.youtube.com/live2/key-i-think"],
 { stdio: 'pipe' })

const randomColor = (depth) => Math.floor(Math.random() * depth)
const random = (min, max) => (Math.random() * (max - min)) + min;

let i = 0;
let drawSomething = function () {
 ctx.strokeStyle = `rgb(${randomColor(255)}, ${randomColor(255)}, ${randomColor(255)})`
 let x1 = random(0, canvas.width);
 let x2 = random(0, canvas.width);
 let y1 = random(0, canvas.height);
 let y2 = random(0, canvas.height);
 ctx.moveTo(x1, y1);
 ctx.lineTo(x2, y2);
 ctx.stroke();

 let out = canvas.toBuffer();
 ffmpeg.stdin.write(out)
 i++;
 if (i >= 30) {
 ffmpeg.stdin.end();
 clearInterval(int)
 };
}

drawSomething();
let int = setInterval(drawSomething, 1000);

I'm not getting any errors, but I'm not getting any video data from it either. I have set up an RTMP server that I can connect to and then open the stream with VLC, but no video data arrives. Am I doing something wrong? I looked around for a while and I can't seem to find anyone who has tried this, so I don't really have a clue...


Anomalie #4564: spip 3.2.8 : create_function deprecated


Does it actually break, or does it only produce warnings?

Anomalie #4564: spip 3.2.8 : create_function deprecated


On the users list, the report is not about the dist but about a page of the Couteau Suisse plugin (and to answer your question anyway: apparently it stays broken even without deprecated messages being displayed, perhaps for another reason specific to the CS. But the code linked above is indeed in the dist).

Evolution #4565 (New): Translating month names with affdate


A small idea for an enhancement:

Translate the month into the site's language when a PHP date-style format is used (e.g. for 2/10/2020, [(#DATE|affdate{d F Y})] would give "2 octobre 2020" on a French-language site).

FFmpeg streaming UDP


I'm trying to stream, using FFmpeg, my webcam and audio to a PC in another LAN that connects to mine.

I basically wait for an incoming connection in order to acquire the IP and port of the other side:

import socket

localPort = 1234
bufferSize = 1024

UDPServerSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM)
UDPServerSocket.bind(("", localPort))  # Bind to address and port

while(True):
    bytesAddressPair = UDPServerSocket.recvfrom(bufferSize)
    message = bytesAddressPair[0].decode("utf-8")
    address = bytesAddressPair[1]
    # Sending a reply to client
    UDPServerSocket.sendto(str.encode("Hello"), address)
    break

UDPServerSocket.close()

Then I try to send the stream with FFmpeg, using the same port number both for the server (localPort) and for the client (the one I acquired from address):

import re
from threading import Thread
from subprocess import Popen, PIPE

def detect_devices():
    list_cmd = 'ffmpeg -list_devices true -f dshow -i dummy'.split()
    p = Popen(list_cmd, stderr=PIPE)
    flagcam = flagmic = False
    for line in iter(p.stderr.readline, ''):
        if flagcam:
            cam = re.search('".*"', line.decode(encoding='UTF-8')).group(0)
            cam = cam if cam else ''
            flagcam = False
        if flagmic:
            mic = re.search('".*"', line.decode(encoding='UTF-8')).group(0)
            mic = mic if mic else ''
            flagmic = False
        elif 'DirectShow video devices'.encode(encoding='UTF-8') in line:
            flagcam = True
        elif 'DirectShow audio devices'.encode(encoding='UTF-8') in line:
            flagmic = True
        elif 'Immediate exit requested'.encode(encoding='UTF-8') in line:
            break
    return cam, mic


class ffmpegThread(Thread):
    def __init__(self, address):
        Thread.__init__(self)
        self.address = address

    def run(self):
        cam, mic = detect_devices()
        # URL options after the first '?' are chained with '&' (the original used a second '?', which ffmpeg does not parse)
        command = 'ffmpeg -f dshow -i video='+cam+':audio='+mic+' -profile:v high -pix_fmt yuvj420p -level:v 4.1 -preset ultrafast -tune zerolatency -vcodec libx264 -r 10 -b:v 512k -s 240x160 -acodec aac -ac 2 -ab 32k -ar 44100 -f mpegts -flush_packets 0 -t 40 udp://'+self.address+'?pkt_size=1316&localport='+str(localPort)
        p = Popen(command, stderr=PIPE)
        for line in iter(p.stderr.readline, ''):
            if len(line) < 5:
                break
        p.terminate()


thread1 = ffmpegThread(address[0]+":"+str(address[1]))
thread1.start()

While on the other side I have:

from threading import Thread
import tkinter as tk
import vlc

class myframe(tk.Frame):
    def __init__(self, width=240, height=160):
        self.root = tk.Tk()
        super(myframe, self).__init__(self.root)
        self.root.geometry("%dx%d" % (width, height))
        self.root.wm_attributes("-topmost", 1)
        self.grid()
        self.frame = tk.Frame(self, width=240, height=160)
        self.frame.configure(bg="black")
        self.frame.grid(row=0, column=0, columnspan=2)
        self.play()
        self.root.mainloop()

    def play(self):
        self.player = vlc.Instance().media_player_new()
        self.player.set_mrl('udp://@0.0.0.0:5000')
        self.player.set_hwnd(self.frame.winfo_id())
        self.player.play()

class guiThread(Thread):
    def __init__(self, nome):
        Thread.__init__(self)
        self.nome = nome

    def run(self):
        app = myframe()

and:

import socket

msgFromClient = "Hello UDP Server"
bytesToSend = str.encode(msgFromClient)
serverAddressPort = ("MYglobal_IPaddress", 1234)
bufferSize = 1024
localPort = 5000

# Create a UDP socket at client side
UDPClientSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM)
UDPClientSocket.bind(("", localPort))

UDPClientSocket.sendto(bytesToSend, serverAddressPort)

msgFromServer = UDPClientSocket.recvfrom(bufferSize)
msg = msgFromServer[0].decode("utf-8")
print(msg)
UDPClientSocket.close()
gui = guiThread("ThreadGUI")
gui.start()

I'm not able to reach the client with the stream. I've tested everything else, so the problem should be in the way I try to reach the client.
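One thing that can be verified before involving FFmpeg at all is whether the UDP path back to the client works. With most NATs the return traffic has to come from the same address and port the client originally contacted (port 1234 here), and it arrives at whatever public port the NAT assigned, not necessarily at 5000. A small probe, run on the server right after the handshake and before the socket is closed, is sketched below; it reuses the names from the first snippet, so treat it as an assumption-laden sanity check rather than part of the original code.

import time

# 'address' is the (public_ip, public_port) pair learned from recvfrom().
# Send a few probes from the SAME socket, and therefore the same local
# port 1234, that the client contacted; if these never reach the client,
# an FFmpeg stream sent to the same destination will not reach it either.
for n in range(5):
    UDPServerSocket.sendto(b"probe %d" % n, address)
    time.sleep(0.5)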

How to overlay 2 videos with ffmpeg so that the overlay starts before the main video?


This command overlays the intro on the main clip, and the main clip starts at the 11th second, but the intro does not play before that 11th second, only once the main video starts. I would like the intro to play before the main clip and also to overlay the main clip starting at the 11th second. I have simplified the command (the intro is made transparent so that the main clip is actually visible when they overlap).

ffmpeg.exe -y -i ovelay.mov -i main.mp4 -an -filter_complex "[0:v]setpts=PTS-STARTPTS[intro];[1:v]setpts=PTS-STARTPTS+11/TB[main_clip];[main_clip][intro]overlay[out]" -map "[out]" "out.mp4"

FFMPEG - make a stable & zero-padded frame counter


I use FFMPEG to print a frame counter on my videos, but I have two issues:

demo GIF

  1. The text stutters
  2. I'd like to have the option of zero-padding the number (i.e. write 001, 002, 003 instead of 1, 2, 3).

Code:

@echo off
:again

cd /D %~p1

ffmpeg ^
 -i "%~nx1" ^
 -vf "drawtext=fontfile=arialbd.ttf: text='Frame \: %%{n}': start_number=1: x=(w-tw)/2: y=h-(2*lh): fontcolor=white: fontsize=40: box=1: boxcolor=black@0.4: boxborderw=8" ^
 -c:a copy ^
 "%~p1%~n1_framenumbered.mov"
if NOT ["%errorlevel%"]==["0"] goto:error
echo %~n1 Done!

shift
if "%~1" == "" goto:end
goto:again

:error

echo There was an error. Please check your input file or report an issue on github.com/L0Lock/FFmpeg-bat-collection/issues.
pause
exit 0

:end

cls
echo Encoding successful. This window will close after 10 seconds.
timeout /t 10

Solutions:

  • use `text='Frame \: %{eif:n:d:3}'` to get a zero-padded frame count (thanks to this answer)
  • use a monospace font (Courier New is common on Windows)
  • the script was failing to load the font; use the full path instead, but without the drive letter (thanks to this answer):
    • Do: /Windows/Fonts/courbd.ttf
    • Don't: C:/Windows/Fonts/courbd.ttf, and don't use backslashes

"Could not demultiplex stream" in loading Video recorded by Opencv's VideoWriter


My program is

int main(){
 cout <<"Start the process"<< endl;
 cv::VideoCapture vcap("rtsp://root:pass@192.168.0.90/axis-media/media.amp?camera=1");
 cout <<"Camera connection done!"<< endl;
 cv::Mat image, small;
 //Output video
 cv::Size S = cv::Size((int) vcap.get(CV_CAP_PROP_FRAME_WIDTH), (int) vcap.get(CV_CAP_PROP_FRAME_HEIGHT));
 int ex = static_cast<int>(vcap.get(CV_CAP_PROP_FOURCC));
 int fps = vcap.get(CV_CAP_PROP_FPS);
 cout <<"fps "<< fps <<" ex "<< ex << endl;
 cv::VideoWriter outputVideo;
 outputVideo.open("TEST.avi", ex/*CV_FOURCC('X', '2', '6', '4')*/, vcap.get(CV_CAP_PROP_FPS), S, true);
 if(!outputVideo.isOpened()){
 cout <<"Could not open the output video for write"<< endl;
 return -1;
 }

 for(;;){
 if(!vcap.read(image)){
 std::cout <<"No frame"<< std::endl;
 cv::waitKey(0);
 }

 cv::resize(image, small, image.size()/2, 0, 0 , cv::INTER_LINEAR);
 cv::imshow("Display", small);
 cv::waitKey(1);
 outputVideo.write(small); // note: the writer was opened with the full frame size S, but 'small' is only half that size
 if(getkey() == '\n')
 break;
 }
 cout <<"Camera release"<< endl;
 outputVideo.release();
 vcap.release();
 image.release();
 small.release();
 return 0;
}

In the line int ex = static_cast<int>(vcap.get(CV_CAP_PROP_FOURCC));, ex is 0 here.

I can record TEST.avi, but it can't be read by cv::VideoCapture vcap("TEST.avi");, by VLC, or by Videos on Ubuntu. The error is "Could not demultiplex stream".

If I change it to

outputVideo.open("TEST.avi", CV_FOURCC('X', '2', '6', '4'), vcap.get(CV_CAP_PROP_FPS), S, true);
outputVideo.open("TEST.avi", CV_FOURCC('P','I','M','1'), vcap.get(CV_CAP_PROP_FPS), S, true);
outputVideo.open("TEST.avi", CV_FOURCC('M', 'P', '4', '2'), vcap.get(CV_CAP_PROP_FPS), S, true);
etc.

all have the same problem.

If I set

outputVideo.open("TEST.avi", CV_FOURCC('i', 'Y', 'U', 'V'), vcap.get(CV_CAP_PROP_FPS), S, true);

I get this error: OpenCV: FFMPEG: iYUV is not supported with codec id 14

For

outputVideo.open("TEST.avi", CV_FOURCC('M', 'J', 'P', 'G'), vcap.get(CV_CAP_PROP_FPS), S, true);


OpenCV Error: Assertion failed (img.cols == width && img.rows == height && channels == 3) in write, file /home/Softwares/opencv/opencv/modules/videoio/src/cap_mjpeg_encoder.cpp, line 829
terminate called after throwing an instance of 'cv::Exception'
 what(): /home/Softwares/opencv/opencv/modules/videoio/src/cap_mjpeg_encoder.cpp:829: error: (-215) img.cols == width && img.rows == height && channels == 3 in function write

What could be wrong? Could the problem be with my FFmpeg?


How to compress the video and normalise the audio with one command using ffmpeg?


Is there a way to compress the video and normalise the audio with one command using ffmpeg?
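One possible shape of such a command, offered only as a sketch (the codec choices, the CRF value and the file names are assumptions, not part of the question): ffmpeg can re-encode the video with a quality target and run the audio through its loudnorm filter in the same invocation.

import subprocess

src, dst = "input.mp4", "output.mp4"  # hypothetical names

# Single ffmpeg run: compress the video (libx264, CRF 28) and
# normalise the audio with the EBU R128 loudnorm filter.
subprocess.run([
    "ffmpeg", "-y", "-i", src,
    "-c:v", "libx264", "-crf", "28", "-preset", "medium",
    "-af", "loudnorm",
    "-c:a", "aac",
    dst,
], check=True)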

ffmpeg: create video screenshots with an external subtitle


I run command:

ffmpeg -ss 3 -copyts -n -threads 4 -i "C:\test\My Movie.mp4" -filter_complex "subtitles='C\:/test/my subtitle.srt',sub_charenc=UTF-8,scale=1:360" -vframes 1 "C:\test\ss1.png"

and I got error:

[Parsed_subtitles_0 @ 000002d733af06c0] Shaper: FriBidi 1.0.9 (SIMPLE)
[Parsed_subtitles_0 @ 000002d733af06c0] Using font provider directwrite
[srt @ 000002d733ce0ac0] Invalid UTF-8 in decoded subtitles text; maybe missing -sub_charenc option
[Parsed_subtitles_0 @ 000002d733af06c0] Error decoding: Invalid data found when processing input (ignored)
[srt @ 000002d733ce0ac0] Invalid UTF-8 in decoded subtitles text; maybe missing -sub_charenc option
[Parsed_subtitles_0 @ 000002d733af06c0] Error decoding: Invalid data found when processing input (ignored)
[srt @ 000002d733ce0ac0] Invalid UTF-8 in decoded subtitles text; maybe missing -sub_charenc option
[Parsed_subtitles_0 @ 000002d733af06c0] Error decoding: Invalid data found when processing input (ignored)
[srt @ 000002d733ce0ac0] Invalid UTF-8 in decoded subtitles text; maybe missing -sub_charenc option
[Parsed_subtitles_0 @ 000002d733af06c0] Error decoding: Invalid data found when processing input (ignored)
[srt @ 000002d733ce0ac0] Invalid UTF-8 in decoded subtitles text; maybe missing -sub_charenc option
[Parsed_subtitles_0 @ 000002d733af06c0] Error decoding: Invalid data found when processing input (ignored)
[srt @ 000002d733ce0ac0] Invalid UTF-8 in decoded subtitles text; maybe missing -sub_charenc option
[Parsed_subtitles_0 @ 000002d733af06c0] Error decoding: Invalid data found when processing input (ignored)
[AVFilterGraph @ 000002d7336bf9c0] Error initializing filter 'subtitles' with args 'C\:/test/my subtitle.srt'
Error initializing complex filters.
Invalid data found when processing input

Where is the problem? I passed the sub_charenc argument, but it is not recognized.

Update 1

I tried

ffmpeg -ss 3 -copyts -n -threads 4 -i "C:\test\My Movie.mp4" -vf -sub_charenc UTF-8 subtitles="C\:/test/my subtitle.srt" -vframes 1 "C:\test\ss1.png"

I got:

[NULL @ 00000265186d0540] Unable to find a suitable output format for 'UTF-8' UTF-8: Invalid argument

Update 2

ffmpeg -ss 3 -copyts -n -threads 4 -i "C:\test\My Movie.mp4" -vf -sub_charenc UTF-8:subtitles="C\:/test/my subtitle.srt" -vframes 1 "C:\test\ss1.png"

I got:

Output file #0 does not contain any stream

Update 3

ffmpeg -ss 3 -copyts -n -threads 4 -i "C:\test\My Movie.mp4" -filter_complex "subtitles='C\:/test/my subtitle.srt';sub_charenc=UTF-8" -vframes 1 "C:\test\ss1.png"

Option 'sub_charenc' not found
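For reference, -sub_charenc is an input option for the subtitle decoder, so it cannot be attached to -vf or -filter_complex the way the updates above try; when the .srt is opened by the subtitles filter itself, the filter's own charenc option is the place to set the encoding. A sketch of that placement follows (shown via Python's subprocess for consistency with the rest of this page); the CP1252 value is an assumption and has to match the file's real encoding.

import subprocess

# The charenc value must match the actual encoding of the .srt file
# (CP1252 is only an example); the filter converts it to UTF-8 internally.
subprocess.run([
    "ffmpeg", "-ss", "3", "-copyts", "-n",
    "-i", r"C:\test\My Movie.mp4",
    "-filter_complex",
    r"subtitles='C\:/test/my subtitle.srt':charenc=CP1252",
    "-vframes", "1", r"C:\test\ss1.png",
], check=True)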

FFmpeg: get list of devices through python


Wassup.
I have a shell command:

ffmpeg -list_devices true -f dshow -i dummy -hide_banner

When I run that command I get the following output (see the text below):

[dshow @ 00000281450fbdc0] DirectShow video devices (some may be both video and audio devices)
[dshow @ 00000281450fbdc0] "HD WebCam"
[dshow @ 00000281450fbdc0] Alternative name "@device_pnp_\\?\usb#vid_0408&pid_a060&mi_00#6&391c16c1&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global"
[dshow @ 00000281450fbdc0] DirectShow audio devices

[dshow @ 00000281450fbdc0] "Microphone (Realtek High Definition Audio)"
[dshow @ 00000281450fbdc0] Alternative name "@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{4727F33A-DE04-4706-8312-03696FACC791}"
[dshow @ 00000281450fbdc0] "Stereo mix (Realtek High Definition Audio)"
[dshow @ 00000281450fbdc0] Alternative name "@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{534A8FBC-6C02-4384-B51C-D0363BB7F8FD}"
[dshow @ 00000281450fbdc0] "Microphone (Avsoft Virtual Audio Device)"
[dshow @ 00000281450fbdc0] Alternative name "@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{5CE20B48-361E-4B96-B113-B3E02BA448EC}"
dummy: Immediate exit requested

I need to get a list of all audio devices, and I don't want to parse that string; it would be hard. How can I get the list of all audio devices using the ffmpeg-python module? Thank you.
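As far as I can tell, the ffmpeg-python module is a thin wrapper around the ffmpeg command line and does not expose a device-enumeration API, so in practice the dshow listing still has to be read from ffmpeg's stderr. A sketch of doing that with the standard library is below; the section markers and the quoted-name pattern are taken from the output shown above.

import re
import subprocess

def list_dshow_audio_devices():
    """Return the audio device names printed by ffmpeg's dshow device lister."""
    proc = subprocess.run(
        ["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy", "-hide_banner"],
        capture_output=True, text=True,
    )
    devices, in_audio = [], False
    for line in proc.stderr.splitlines():
        if "DirectShow audio devices" in line:
            in_audio = True
        elif "DirectShow video devices" in line:
            in_audio = False
        elif in_audio and "Alternative name" not in line:
            match = re.search(r'"(.+)"', line)
            if match:
                devices.append(match.group(1))
    return devices

print(list_dshow_audio_devices())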

change OpenCV to ffmpeg


I am using EarthCam links for data collection purposes. My implementation works perfectly fine locally, but gives me an "OpenCV: image broken" error on a CentOS 7 system. As ffmpeg is used in the background by OpenCV, I want to use ffmpeg directly to read the m3u8 file retrieved from streamlink, in the following way.

streams = streamlink.streams(stream_link)
q = list(streams.keys())[0]
stream = streams['%s' % q]
video_cap = cv2.VideoCapture(stream.url) ----> # Want to change this line to ffmpeg

An example link that I am using in my problem is - Nyc_5th_Street

In addition, I just need the first frame of the m3u8 stream each time I loop over it.
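Since only the first frame is needed, one option, sketched under the assumption that calling the ffmpeg binary directly is acceptable, is to let ffmpeg grab a single frame from the stream URL and then load that image; the helper name and the temporary file below are hypothetical.

import subprocess
import cv2

def first_frame(stream_url, out_path="frame.jpg"):
    """Grab the first frame of an HLS (m3u8) stream with ffmpeg and load it."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", stream_url, "-frames:v", "1", out_path],
        check=True,
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return cv2.imread(out_path)

# e.g. frame = first_frame(stream.url)  # 'stream' as obtained from streamlink above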


Anomalie #4564: spip 3.2.8 : create_function deprecated


If errors are hidden (which should ideally be the case in production), a deprecated message is absolutely not a problem, and a notice very probably isn't either (unlike a warning or an error, sometimes). So it has already been fixed in 3.3 for a while now; I don't think we need to deal with it in 3.2.
