Channel: MediaSPIP
Viewing all 117669 articles

Evolution #4566 (New): Reverse-chronological article selector


Selecting an article with the SPIP article selector on a site that has a lot of them is a pain!

Quite simply because the most recent articles appear at the very end of a pagination of 100 per page.

All of this could be fixed by adding one tiny little {!par date} criterion to the loop located here:
prive/formulaire/selecteur/inc-nav-articles.html


Fluent-ffmpeg: Adding a gif animation as an overlay from x time to y time


I'm a beginner with ffmpeg. I want to show a GIF animation as an overlay on an input video from x seconds to y seconds. I tried the following code:

var wmimage= 'public/source/watermark_file.gif';

ffmpeg('public/source/small.mp4')
.addOption(['-ignore_loop 0', '-i '+wmimage+ '','-filter_complex [0:v][1:v]overlay=10:10:shortest=1:'])
.save('public/video/output-video2.mp4');

This gives me the GIF overlay from the start to the end of the input video, but I need to show the GIF only for a certain interval (e.g. from second 2 to second 5). So I tried adding

enable="between(t,2,5)" at

.addOption(['-ignore_loop 0', '-i '+wmimage+ '','-filter_complex [0:v][1:v]overlay=10:10:shortest=1:enable="between(t,2,5)"'])

But it throws

Error: ffmpeg exited with code 1: Error initializing complex filters.
Invalid argument

I tried the enable option before overlay and before shortest, but it gives the same error.

Any help would be appreciated.
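Not part of the original post, but a hedged guess at the cause: fluent-ffmpeg invokes ffmpeg directly, without a shell, so the double quotes in enable="between(t,2,5)" reach ffmpeg's filter parser literally and break it. The quoting has to be done at the filtergraph level instead:

```javascript
// Sketch under the assumption above: no shell is involved, so the filter
// string itself must carry filtergraph-level quoting. The single quotes
// protect the commas inside between(); note there is also no trailing ':'
// after the last overlay option.
const filter = "[0:v][1:v]overlay=10:10:shortest=1:enable='between(t,2,5)'";
console.log(filter);
```

With fluent-ffmpeg it may also be cleaner to pass the GIF as a proper second input via `.input(wmimage).inputOptions('-ignore_loop 0')` and the graph via `.complexFilter(filter)` rather than packing everything into `addOption`.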

FFMPEG Hwaccel error with -hwaccel_output_format


I have an Nvidia 1050 Ti GPU and I am testing ffmpeg with CUDA:

ffmpeg -hwaccel nvdec -hwaccel_output_format cuda -i input.mp4 -y \ 
-c:v h264_nvenc -c:a libmp3lame -b:v 3M \
-filter_complex hwdownload,scale=w=iw*min(1280/iw\,720/ih):h=ih*min(1280/iw\,720/ih),hwupload out.mp4

Error:

[hwupload @ 00000199b49c1080] A hardware device reference is required to upload frames to.
[Parsed_hwupload_2 @ 000001999ba8ee80] Query format failed for 'Parsed_hwupload_2': Invalid argument
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:0
Conversion failed!

I want full hardware transcode without using CPU.
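A hedged reading of the error, not from the original post: the final hwupload has no hardware device bound to it, which is exactly what "A hardware device reference is required to upload frames to" says; binding one with -init_hw_device/-filter_hw_device would silence it, but the hwdownload/scale/hwupload round-trip through system memory defeats the goal of a full-GPU transcode anyway. A simpler untested sketch, assuming an ffmpeg build with the CUDA filters, keeps the frames on the GPU and scales there:

```shell
# Untested sketch: decode, scale and encode entirely on the GPU.
# scale_cuda requires an ffmpeg build with CUDA filter support.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -y \
  -vf "scale_cuda=1280:720" \
  -c:v h264_nvenc -b:v 3M -c:a libmp3lame out.mp4
```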

What is KeyError: 'twoColumnSearchResultsRenderer'? (Python, youtube_dl)


I have been writing a Discord bot in Python using discord.py version 1.5.0. Only recently have I begun adding music commands to it with youtube_dl, youtube_search, and ffmpeg. I can play the audio fine, as well as pull titles, durations, thumbnails, etc. The only problem I have had so far is that when I store the YouTube URL in a list and then index it on the "next" command, it returns "KeyError: 'twoColumnSearchResultsRenderer'", and I cannot find any other examples of this error being returned to the user. If someone could help me understand what the issue is, that would be greatly appreciated.

edit:

@bot.command()
async def play(ctx, *args):
    # local variables
    global queueList
    server = ctx.message.guild
    voiceChannel = server.voice_client
    keywords = listtostring(args)
    results = ast.literal_eval(YoutubeSearch(keywords, max_results=1).to_json())
    test = results['videos']
    print(test[0])
    vidId = test[0]['id']
    vidThumbnail = test[0]['thumbnails'][0]
    vidUrl = "https://www.youtube.com/watch?v={}".format(vidId)
    queueList.append(vidUrl)

    if args and ctx.message.author.voice:
        async with ctx.typing():
            player = await YTDLSource.from_url(queueList[0], loop=client.loop)
            await ctx.send("`{}` added to the queue!".format(player.title))

            try:
                voiceChannel.play(player, after=lambda e: print('Player error: %s' % e) if e else None)
                nowPlaying = discord.Embed(
                    title="Now playing",
                    description="Now playing **{}**".format(player.title),
                    color=rupertColor
                )
                nowPlaying.set_thumbnail(url=vidThumbnail)

                await ctx.send(embed=nowPlaying)
            except:
                print("failed playing")

            del queueList[0]
    else:
        await stop(ctx)
        async with ctx.typing():
            player = await YTDLSource.from_url(str(queueList[0]), loop=client.loop)
            try:
                voiceChannel.play(player, after=lambda e: print('Player error: %s' % e) if e else None)
                await ctx.send('**Now playing:** {}'.format(player.title))
            except:
                print("failed playing")
                queueList.append(vidUrl)

            del queueList[0]

How to save the stream URL chunks as a video in NodeJs


I'm trying to download a piece of video from the below URL using axios.

https://r5---sn-gwpa-civy.googlevideo.com/videoplayback?...&range=32104-500230

I'm using the code below to fetch the above URL and save the chunk to an mp4 file.

axios({
    url: MY_URL,
    method: 'GET',
    responseType: 'stream'
}).then(function (response) {
    let buffArray = []
    response.data.on('data', (chunk) => {
        buffArray.push(chunk)
    })

    response.data.on('end', () => {
        let buff = Buffer.concat(buffArray)
        fs.writeFile('output.mp4', buff, (err) => {
            console.log("File created");
        })
    })
})

The above code works and I'm able to save the video, but the saved video won't play. When I change the range in the URL from &range=32104-500230 to &range=0-500230, the video plays properly. How can I save a playable video from a small chunk of the video stream instead of saving the full video?
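A likely explanation, offered as an assumption rather than fact: the chunk starting at byte 32104 skips the MP4 header (the ftyp/moov boxes at the start of the file), so a file written from that range alone cannot be parsed by any player; that is exactly why range=0-500230 works. If only one range request is possible, the simplest workaround is to always start the range at 0. A small hypothetical helper:

```javascript
// Hypothetical helper (not from the original post): rewrite the `range`
// query parameter so the request starts at byte 0 and therefore includes
// the header boxes a standalone MP4 file needs.
function rangeFromZero(url) {
  return url.replace(/range=\d+-(\d+)/, 'range=0-$1');
}

console.log(rangeFromZero('https://example.com/videoplayback?range=32104-500230'));
```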

pyglet's video example application not working


I'm trying to get the pyglet video player example to work, but I'm getting the following error:

pyglet.media.codecs.ffmpeg.FFmpegException: avformat_open_input in ffmpeg_open_filename returned an error opening file /home/ce/Downloads/sample-mp4-file.mp4 Error code: -1094995529

I decoded this number, and it turns out to correspond to the "INDA" error code in FFmpeg's error.h, which means:

Invalid data found when processing input

The example video that I used can be downloaded here. I tried the mp4 one, the avi one and the webm one. I also tried other files that I have locally. They all work in other video players, and in fact some were created using FFMPEG.

Finally, I used pyglet.media.have_ffmpeg() to make sure that pyglet agrees with me that I have ffmpeg installed.

I have tried it with both ffmpeg 3.4 and ffmpeg 4.1.

What can I do?
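As a sanity check on the decoding step described above: FFmpeg error codes are the negation of a four-character tag packed little-endian (FFERRTAG in error.h), so the mapping from -1094995529 to "INDA" (AVERROR_INVALIDDATA) can be reproduced in a few lines of Python:

```python
def decode_fferror(code: int) -> str:
    """Decode an FFmpeg AVERROR code into its four-character tag.

    FFmpeg builds these codes as the negation of a little-endian
    four-byte tag (see FFERRTAG in libavutil/error.h).
    """
    tag = -code
    return bytes((tag >> (8 * i)) & 0xFF for i in range(4)).decode("ascii")

print(decode_fferror(-1094995529))  # prints INDA
```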


Convert M3U8 playlist to MP4 using GStreamer


I'm trying to convert an HLS playlist to an MP4 file. The .ts files in the list are guaranteed to be H.264/AAC and to have the same resolution (for cases where there is an EXT-X-DISCONTINUITY tag).

This is the closest I have gotten to a working pipeline:

gst-launch-1.0 mp4mux name=mux ! filesink location=x.mp4 souphttpsrc location="https://remote/path/to/index.m3u8" ! decodebin name=decode ! videoconvert ! queue ! x264enc ! mux. decode. ! audioconvert ! avenc_aac ! mux.

I don't really know whether the result is valid, as this command makes GStreamer play the HLS at playback speed instead of fast-forwarding and ingesting as fast as possible (the list is closed with #EXT-X-ENDLIST).

The second issue is that this pipeline appears to be encoding the stream instead of just copying it. I don't need it to encode, only to change the container: the H.264/AAC in the .ts files is what I also need in the .mp4 file.

So, is it possible to only copy rather than transcode, with as-fast-as-possible ingestion instead of real-time speed?

Basically, I am trying to find the GStreamer equivalent to this FFmpeg command:

ffmpeg -i "https://remote/path/to/index.m3u8" -c copy x.mp4

(I have to use GStreamer and not FFmpeg.)
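Not from the original post, but one direction worth trying (untested, and it assumes hlsdemux/tsdemux from gst-plugins-bad are available): skip decodebin entirely, demux the transport stream, and hand the parsed elementary streams straight to mp4mux. With no decoder or encoder in the graph, the pipeline should also run as fast as the source delivers.

```shell
# Untested sketch: remux HLS (H.264/AAC in TS) into MP4 without re-encoding.
# h264parse/aacparse only repackage the elementary streams for mp4mux.
gst-launch-1.0 mp4mux name=mux ! filesink location=x.mp4 \
  souphttpsrc location="https://remote/path/to/index.m3u8" ! hlsdemux ! tsdemux name=d \
  d. ! queue ! h264parse ! mux. \
  d. ! queue ! aacparse ! mux.
```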

Up-mix Stereo to 5.1 with FFMPEG, filtering each channel [closed]


I developed this to transform stereo to 5.1:

ffmpeg -i d:\man.mp4 -c:v mpeg2video -b:v 8M -maxrate 12M -bufsize 4M -filter_complex "pan=5.1|FL=FL|FR=FR|FC

Note: this also does an MP4 to MPEG-2 conversion.

The issue is that the dialog is on all the channels and it doesn't sound like real 5.1. What I would like to do is apply a bandpass filter to just the center channel to focus on the dialog piece. I then want to apply an opposite notch filter to the rest of the channels to focus on everything but the dialog.

Been doing a lot of searching and coming up empty. Many thanks for a nudge in the right direction.
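One possible direction, offered as an untested sketch rather than a known-good answer: after the pan up-mix, channelsplit the 5.1 into mono streams, band-pass only the centre, band-reject the others, and join them back. Every pan coefficient and filter frequency below is a placeholder to tune by ear (the original pan string is truncated in the post, so these are invented for illustration).

```shell
# Untested sketch: per-channel filtering of an up-mixed 5.1 track.
ffmpeg -i d:\man.mp4 -c:v mpeg2video -b:v 8M -maxrate 12M -bufsize 4M -filter_complex \
"pan=5.1|FL=FL|FR=FR|FC=0.5*FL+0.5*FR|LFE=0.5*FL+0.5*FR|BL=FL|BR=FR[a]; \
 [a]channelsplit=channel_layout=5.1[fl][fr][fc][lfe][bl][br]; \
 [fc]bandpass=f=1000:width_type=q:w=1[fc2]; \
 [fl]bandreject=f=1000:width_type=q:w=1[fl2]; \
 [fr]bandreject=f=1000:width_type=q:w=1[fr2]; \
 [bl]bandreject=f=1000:width_type=q:w=1[bl2]; \
 [br]bandreject=f=1000:width_type=q:w=1[br2]; \
 [fl2][fr2][fc2][lfe][bl2][br2]join=inputs=6:channel_layout=5.1[out]" \
-map 0:v -map "[out]" out.mpg
```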


GIF does not loop in ffmpeg movie filter


I have a GIF that loops infinitely (loop.gif). I want to overlay this GIF on the top-left corner of video.mpg, so I am using this command:

ffmpeg -i video.mpg -vf "movie=loop.gif [logo]; [in][logo] overlay=10:10 [out]" -vcodec mpeg2video out.mpg

The problem is that the GIF loops only once, and the last frame of the GIF is shown until the end of video.mpg.

How can I loop this gif continuously?
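One alternative worth trying (a sketch, not a verified answer): feed the GIF as a second input instead of through the movie filter, so the GIF demuxer's -ignore_loop option can be used.

```shell
# Untested sketch: -ignore_loop 0 makes the gif demuxer honour the GIF's
# loop count (infinite here); shortest=1 ends the overlay with the main video.
ffmpeg -i video.mpg -ignore_loop 0 -i loop.gif \
  -filter_complex "[0:v][1:v]overlay=10:10:shortest=1" \
  -vcodec mpeg2video out.mpg
```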

FFmpeg converts mp4 to ts incorrectly


I have a video.MOV in H.264.

When I convert this video to .mp4 with -c:v copy, everything is OK:

ffmpeg -i video.mp4 -c:v copy -c:a copy output.mp4

But if I convert to .ts with the same -c:v copy, I get a rotated video:

ffmpeg -i video.mp4 -c:v copy -c:a copy output.ts // output is rotated... WTF?

If I specify -c:v libx264, everything is OK too:

ffmpeg -i video.mp4 -c:v libx264 -c:a copy output.ts

The output of this command contains the expected info:

ffmpeg -i video.MOV -c:v copy -c:a copy output.ts

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.MOV':
....
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, 15633 kb/s, 29.97 fps, 29.97 tbr, 600 tbn, 1200 tbc (default)
 Metadata:
 rotate : 90
 ...
 encoder : H.264
 Side data:
 displaymatrix: rotation of -90.00 degrees
 ...

Output #0, segment, to 'output.ts':
 ...
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080, q=2-31, 15633 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 600 tbc (default)
 Metadata
 rotate : 90
 ...
 encoder : H.264
 Side data:
 displaymatrix: rotation of -90.00 degrees

I get almost the same info when converting to mp4, but there everything is OK and there is no difference in rotation when playing in a player.

So what's wrong with converting to ts? The end goal is segmenting the MOV file into an m3u8 playlist with .ts segments, where the same problem occurs, so I provided this simpler example.
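A hedged explanation, not from the original post: with -c:v copy the pixels are never touched; the rotation lives only in metadata (the displaymatrix side data), which players honour for MP4/MOV but generally not for MPEG-TS, since TS has no standard way to signal display rotation. Re-encoding lets ffmpeg's autorotation bake the 90° matrix into the pixels themselves:

```shell
# Sketch: re-encode the video so autorotate applies the display matrix
# to the actual frames; the audio can still be stream-copied.
ffmpeg -i video.MOV -c:v libx264 -c:a copy output.ts
```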

I am using ffmpeg java library to convert captured screenshots to video. Video output is blurry


I am using the ffmpeg Java library to convert captured screenshots to video. The video generated as output is blurry.

I am using a bit rate of 9000, 25 frames per second, and a video size equal to the desktop screen size.

Any suggestions on how to solve this issue?

P.S. I cannot use ffmpeg.exe and the command line due to certain restrictions, hence I am opting for the ffmpeg Java library.

Any suggestions on the issue, or on a better approach, would be helpful.

 import java.awt.AWTException;
 import java.awt.Dimension;
 import java.awt.FlowLayout;
 import java.awt.Rectangle;
 import java.awt.Robot;
 import java.awt.Toolkit;
 import java.awt.event.ActionEvent;
 import java.awt.event.ActionListener;
 import java.awt.image.BufferedImage;
 import java.io.File;
 import java.io.IOException;
 import java.util.Date;
 
 import javax.imageio.ImageIO;
 import javax.swing.JButton;
 import javax.swing.JFrame;
 import javax.swing.JLabel;
 import javax.swing.JOptionPane;
 
 import org.bytedeco.javacpp.avcodec;
 import org.bytedeco.javacv.FFmpegFrameRecorder;
 import org.bytedeco.javacv.OpenCVFrameConverter;
 
 public class ScreenRecorder{
 
 public static boolean videoComplete=false;
 public static String inputImageDir="inputImgFolder"+File.separator;
 public static String inputImgExt="png";
 public static String outputVideo="recording.mp4"; 
 public static int counter=0;
 public static int imgProcessed=0;
 public static FFmpegFrameRecorder recorder=null;
 public static int videoWidth=1920;
 public static int videoHeight=1080;
 public static int videoFrameRate=3;
 public static int videoQuality=0; // 0 is the max quality
 public static int videoBitRate=9000;
 public static String videoFormat="mp4";
 public static int videoCodec=avcodec.AV_CODEC_ID_MPEG4;
 public static Thread t1=null;
 public static Thread t2=null;
 public static JFrame frame=null;
 public static boolean isRegionSelected=false;
 public static int c1=0;
 public static int c2=0;
 public static int c3=0;
 public static int c4=0;
 
 
 public static void main(String[] args) {
 
 try {
 if(getRecorder()==null)
 {
 System.out.println("Cannot make recorder object, Exiting program");
 System.exit(0);
 }
 if(getRobot()==null)
 {
 System.out.println("Cannot make robot object, Exiting program");
 System.exit(0);
 }
 File scanFolder=new File(inputImageDir);
 scanFolder.delete();
 scanFolder.mkdirs();
 
 createGUI();
 } catch (Exception e) {
 System.out.println("Exception in program "+e.getMessage());
 }
 }
 
 public static void createGUI()
 {
 frame=new JFrame("Screen Recorder");
 JButton b1=new JButton("Select Region for Recording");
 JButton b2=new JButton("Start Recording");
 JButton b3=new JButton("Stop Recording");
 JLabel l1=new JLabel("If you dont select a region then full screen recording will be made when you click on Start Recording");
 l1.setFont(l1.getFont().deriveFont(20.0f));
 b1.addActionListener(new ActionListener() {
  @Override
  public void actionPerformed(ActionEvent e) {
   try {
    JOptionPane.showMessageDialog(frame, "A new window will open. Use your mouse to select the region you like to record");
    new CropRegion().getImage();
   } catch (Exception e1) {
    // TODO Auto-generated catch block
    System.out.println("Issue while trying to call the module to crop region");
    e1.printStackTrace();
   }
  }
 });
 b2.addActionListener(new ActionListener() {
  @Override
  public void actionPerformed(ActionEvent e) {
   counter=0;
   startRecording();
  }
 });
 b3.addActionListener(new ActionListener() {
  @Override
  public void actionPerformed(ActionEvent e) {
   stopRecording();
   System.out.print("Exiting...");
   System.exit(0);
  }
 });
 frame.add(b1);
 frame.add(b2);
 frame.add(b3);
 frame.add(l1);
 frame.setLayout(new FlowLayout(0));
 frame.setVisible(true);
 frame.setSize(1000, 170);
 frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
 }
 
 public static void startRecording() {
 t1=new Thread() {
  public void run() {
   try {
    takeScreenshot(getRobot());
   } catch (Exception e) {
    JOptionPane.showMessageDialog(frame, "Cannot make robot object, Exiting program "+e.getMessage());
    System.out.println("Cannot make robot object, Exiting program "+e.getMessage());
    System.exit(0);
   }
  }
 };
 t2=new Thread() {
  public void run() {
   prepareVideo();
  }
 };
 t1.start();
 t2.start();
 System.out.println("Started recording at "+new Date());
 }
 
 public static Robot getRobot() throws Exception {
 Robot r=null;
 try {
  r = new Robot();
  return r;
 } catch (AWTException e) {
  JOptionPane.showMessageDialog(frame, "Issue while initiating Robot object "+e.getMessage());
  System.out.println("Issue while initiating Robot object "+e.getMessage());
  throw new Exception("Issue while initiating Robot object");
 }
 }
 
 public static void takeScreenshot(Robot r) {
 Dimension size = Toolkit.getDefaultToolkit().getScreenSize();
 Rectangle rec=new Rectangle(size);
 if(isRegionSelected) {
  rec=new Rectangle(c1, c2, c3-c1, c4-c2);
 }
 while(!videoComplete) {
  counter++;
  BufferedImage img = r.createScreenCapture(rec);
  try {
   ImageIO.write(img, inputImgExt, new File(inputImageDir+counter+"."+inputImgExt));
  } catch (IOException e) {
   JOptionPane.showMessageDialog(frame, "Got an issue while writing the screenshot to disk "+e.getMessage());
   System.out.println("Got an issue while writing the screenshot to disk "+e.getMessage());
   counter--;
  }
 }
 }
 
 public static void prepareVideo() {
 File scanFolder=new File(inputImageDir);
 while(!videoComplete) {
  File[] inputFiles=scanFolder.listFiles();
  try {
   getRobot().delay(500);
  } catch (Exception e) {
  }
  //for(int i=0;i

Imagepanel.java

import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Image;
import javax.swing.ImageIcon;
import javax.swing.JPanel;

class ImagePanel
 extends JPanel
{
 private Image img;
 
 public ImagePanel(String img)
 {
 this(new ImageIcon(img).getImage());
 }
 
 public ImagePanel(Image img)
 {
 this.img = img;
 Dimension size = new Dimension(img.getWidth(null), img.getHeight(null));
 
 setPreferredSize(size);
 setMinimumSize(size);
 setMaximumSize(size);
 setSize(size);
 setLayout(null);
 }
 
 public void paintComponent(Graphics g)
 {
 g.drawImage(this.img, 0, 0, null);
 }
}

CropRegion.java

import java.awt.AWTException;
import java.awt.Dimension;
import java.awt.FlowLayout;
import java.awt.Graphics;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;
import java.awt.event.MouseMotionListener;
import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JOptionPane;


public class CropRegion implements MouseListener,
 MouseMotionListener {

 int drag_status = 0;
 int c1;
 int c2;
 int c3;
 int c4;
 JFrame frame=null;
 static int counter=0;
 JLabel background=null;

 
 public void getImage() throws AWTException, IOException, InterruptedException {
 Dimension size = Toolkit.getDefaultToolkit().getScreenSize();
 Robot robot = new Robot();
 BufferedImage img = robot.createScreenCapture(new Rectangle(size));
 ImagePanel panel = new ImagePanel(img);
 frame=new JFrame();
 frame.add(panel);
 frame.setLocation(0, 0);
 frame.setSize(size);
 frame.setLayout(new FlowLayout());
 frame.setUndecorated(true);
 frame.setVisible(true);
 frame.addMouseListener(this);
 frame.addMouseMotionListener(this);
 frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
 }

 public void draggedScreen() throws Exception {
 ScreenRecorder.c1=c1;
 ScreenRecorder.c2=c2;
 ScreenRecorder.c3=c3;
 ScreenRecorder.c4=c4;
 ScreenRecorder.isRegionSelected=true;
 JOptionPane.showMessageDialog(frame, "Region Selected.Please click on Start Recording button to record the selected region.");
 frame.dispose();
 }

 public void mouseClicked(MouseEvent arg0) {
 }

 public void mouseEntered(MouseEvent arg0) {
 }

 public void mouseExited(MouseEvent arg0) {
 }

 public void mousePressed(MouseEvent arg0) {
 paint();
 this.c1 = arg0.getX();
 this.c2 = arg0.getY();
 }

 public void mouseReleased(MouseEvent arg0) {
 paint();
 if (this.drag_status == 1) {
 this.c3 = arg0.getX();
 this.c4 = arg0.getY();
 try {
 draggedScreen();
 } catch (Exception e) {
 e.printStackTrace();
 }
 }
 }

 public void mouseDragged(MouseEvent arg0) {
 paint();
 this.drag_status = 1;
 this.c3 = arg0.getX();
 this.c4 = arg0.getY();
 }

 public void mouseMoved(MouseEvent arg0) {
 }

 public void paint() {
 Graphics g = frame.getGraphics();
 frame.repaint();
 int w = this.c1 - this.c3;
 int h = this.c2 - this.c4;
 w *= -1;
 h *= -1;
 if (w < 0) {
 w *= -1;
 }
 g.drawRect(this.c1, this.c2, w, h);
 }
}

Set up RTMP on CentOS without nginx


I am completely new to this area. I want to know whether it is possible to set up RTMP on a CentOS server without nginx. My web server is LiteSpeed, and I couldn't find any article about it; all articles are about nginx+rtmp. I want to set up an RTMP server, stream to it from OBS or Wirecast, and finally restream it with ffmpeg. Can someone help me with this?

Is it possible to skip ads from Twitch with Python using streamlink & ffmpeg?


I have the following code that pulls 60 seconds from the ESL_CSGO Twitch stream and downloads it to output.mp4:

import streamlink, subprocess

streams = streamlink.streams("twitch.tv/ESL_CSGO")

audio = streams["best"]

subprocess.call("ffmpeg -i "+ str(audio.url) + " -t 60 -c copy -bsf:a aac_adtstoasc output.mp4 -y")

I've noticed that if I manually clip from the .m3u8 generated by streamlink.streams() I don't get any ads, but once ffmpeg converts it to an .mp4 there are ads. Is there any way to circumvent this?

ffmpeg compile for visual studio 2012


I was trying to compile ffmpeg using VS2012; it says the MS version is unsupported and to use 2013 or above.

I also tried compiling with MinGW and got the shared libraries, but I am unable to link them with a VS2012 project; it gives unresolved-symbol errors:

error LNK2019: unresolved external symbol _avcodec_get_class referenced in function _wmain
error LNK2019: unresolved external symbol _avcodec_find_encoder_by_name referenced in function _wmain

I am not sure how to proceed; could anyone help with this, please?

I tried the approach described at https://trac.ffmpeg.org/wiki/CompilationGuide/MSVC but could not make it work.


Anomalie #4564 (Closed): SPIP 3.2.8: create_function deprecated


Evolution #4567 (New): introduction tag


It would be useful to have an equivalent of the introduction tag, for example an introduction_texte tag, which takes the introduction only from the text (possibly the text between intro tags), since that filter cuts the text cleanly without leaving SPIP shortcuts and other annoying elements. Alternatively, it should be possible to cut the text without paragraph shortcuts remaining in the truncated text.

ffmpeg dts_delta_threshold and aresample=async=1


I am using ffmpeg to encode livestreams for use in a Tvheadend server. ffmpeg and HLS discontinuities don't work together, but I've fixed that by using streamlink to read the HLS stream and pipe it into ffmpeg.

Sometimes the audio has gaps in the live stream and goes out of sync from that point on. I have managed to fix this using aresample=async=1: ffmpeg inserts silence for the gaps and the audio stays synced.

Tvheadend doesn't like DTS discontinuities and the stream freezes whenever one is encountered. I have also fixed this with -dts_delta_threshold 1; with that option the stream plays seamlessly without any freezes.

Here is my problem: when using -dts_delta_threshold 1, aresample no longer works, I assume because there are no longer any gaps in which to insert silence. I've tried various combinations and orderings of the options.

Is there any way to apply aresample=async=1 and then also apply -dts_delta_threshold 1?

This is my current command

streamlink -l warning --ringbuffer-size 64M --hls-timeout 100000000 --hls-live-restart hls://192.168.10.1/play/$1.$2.m3u8 best -O | \
ffmpeg -loglevel fatal -err_detect ignore_err \
-f mpegts -i - \
-filter_complex "eq=contrast=${3:-1.0}" \
-c:v libx264 -crf 18 -preset superfast -tune zerolatency -pix_fmt yuv420p -force_key_frames "expr:gte(t,n_forced*2)" \
-c:a aac -b:a 256k -ac 2 -af aresample=async=1 \
-metadata service_provider=$1 -metadata service_name="$1.$2" -f mpegts pipe:1

I've tried putting -dts_delta_threshold before and after the input; either way the audio goes out of sync if there is a gap in it. I've tried putting -async 1 before the input, but that doesn't work either.

ffmpeg hardware acceleration using --enable-nvenc giving errors in VS2012


I am trying to compile ffmpeg with NVENC enabled using the MSVC toolchain with VS2012, and it gives the error:

ERROR: nvenc requested but not found

The following error log is seen in config.log:

c99wrap cl -D_ISOC99_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Dstrtod=avpriv_strtod -Dsnprintf=avpriv_snprintf -D_snprintf=avpriv_snprintf -Dvsnprintf=avpriv_vsnprintf -D_USE_MATH_DEFINES -D_CRT_SECURE_NO_WARNINGS -D_CRT_NONSTDC_NO_WARNINGS -D_WIN32_WINNT=0x0502 -nologo -Dstrtoll=_strtoi64 -Dstrtoull=_strtoui64 -Z7 -W4 -wd4244 -wd4127 -wd4018 -wd4389 -wd4146 -wd4057 -wd4204 -wd4706 -wd4305 -wd4152 -wd4324 -we4013 -wd4100 -wd4214 -wd4307 -wd4273 -wd4554 -wd4701 -O2 -Oy- -FIstdlib.h -c -Fo./ffconf.VPz0M2D2.o ./ffconf.hc8dkH6N.c
ffconf.hc8dkH6N.c
ffconf.VPz0M2D2.o_converted.c
./ffconf.hc8dkH6N.c(1) : warning C4431: missing type specifier - int assumed. Note: C no longer supports default-int
./ffconf.hc8dkH6N.c(1) : error C2054: expected '(' to follow 'inline'
./ffconf.hc8dkH6N.c(1) : error C2085: 'foo' : not in formal parameter list
./ffconf.hc8dkH6N.c(1) : error C2143: syntax error : missing ';' before '{'
check_host_cc
BEGIN ./ffconf.hc8dkH6N.c
 1 static inline int foo(int a) { return a; }
END ./ffconf.hc8dkH6N.c
c99wrap cl -nologo -W4 -wd4244 -wd4127 -wd4018 -wd4389 -wd4146 -wd4057 -wd4204 -wd4706 -wd4305 -wd4152 -wd4324 -we4013 -wd4100 -wd4214 -wd4307 -wd4273 -wd4554 -wd4701 -O3 -c -Fo./ffconf.VPz0M2D2.o ./ffconf.hc8dkH6N.c
cl : Command line warning D9002 : ignoring unknown option '-O3'
ffconf.hc8dkH6N.c
cl : Command line warning D9002 : ignoring unknown option '-O3'
ffconf.VPz0M2D2.o_converted.c
./ffconf.hc8dkH6N.c(1) : warning C4431: missing type specifier - int assumed. Note: C no longer supports default-int
./ffconf.hc8dkH6N.c(1) : error C2054: expected '(' to follow 'inline'
./ffconf.hc8dkH6N.c(1) : error C2085: 'foo' : not in formal parameter list
./ffconf.hc8dkH6N.c(1) : error C2143: syntax error : missing ';' before '{'
ERROR: nvenc requested but not found

ffmpeg - audio is 500ms behind video in screen recording


I am trying to record my screen along with the audio using ffmpeg 4.3, but in the final output the audio is around 500 ms to 1 s behind the video. Why is this happening, and how can it be fixed? Here is the command I am using on a Windows 10 machine:

ffmpeg.exe -threads 4 -rtbufsize 1024m -f dshow -i audio="Microphone (Realtek Audio)" -f gdigrab -offset_x 0 -offset_y 0 -video_size 1920x1080 -framerate 30 -probesize 32 -i desktop -pix_fmt yuv420p -c:v libx264 -crf 28 -preset ultrafast -tune zerolatency -movflags +faststart test.mp4
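Not part of the original post, but one tweak worth trying: the dshow input has an -audio_buffer_size option (in milliseconds), and its default buffering is on the order of 500 ms, which matches the observed delay. Shrinking it may align the audio timestamps with the video (a sketch, otherwise keeping the command above):

```shell
# Untested tweak: request a ~50 ms audio buffer from the dshow device so
# audio timestamps are captured closer to real time.
ffmpeg.exe -threads 4 -rtbufsize 1024m -f dshow -audio_buffer_size 50 -i audio="Microphone (Realtek Audio)" \
  -f gdigrab -offset_x 0 -offset_y 0 -video_size 1920x1080 -framerate 30 -i desktop \
  -pix_fmt yuv420p -c:v libx264 -crf 28 -preset ultrafast -tune zerolatency -movflags +faststart test.mp4
```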

ffmpeg - video is out of sync (slowed down) after converting from .mov to .avi or .mp4


I needed to convert a video from .mov to some other format because it wouldn't play properly on Windows. I tried many approaches, but the result always ended up either a black screen or out of sync (the audio always worked properly). Here are some of the things I tried:

ffmpeg -i input.mov -vcodec copy output.mov

ffmpeg -i input.mov -q:v 1 output.avi

ffmpeg -i input.mov -vcodec mpeg4 output.avi

ffmpeg -i input.mov -vcodec libx264 output.avi

Speeding the video up using ffmpeg -i input.mov -vf "setpts=0.5*PTS" output.avi

Transcoding to a constant framerate before converting: ffmpeg -i input.mov -r 60 input.mov

I tried those things for both .avi and .mp4
