Channel: MediaSPIP

NodeJS ffmpeg streaming


I am using ffmpeg with Node.js and am trying to serve an HLS stream, but I have been unable to get it working. Does anyone have code for this? I have successfully served a single stream using this code:

streamRoutes.get('/stream', function(req, res) {
    // Write header
    res.writeHead(200, { 'Content-Type': 'video/H264' });

    /* var command = ffmpeg.setFfmpegPath(process.env.PWD + "/node_modules/ffmpeg-static/bin/linux/x64/ffmpeg"); */

    // Start ffmpeg
    var command = child_process.spawn(
        process.env.PWD + "/node_modules/ffmpeg-static/bin/linux/x64/ffmpeg",
        [
            "-y",                                 // overwrite output file
            "-hide_banner",                       // hide the banner
            "-loglevel", "quiet",                 // hide log output
            "-i", "http://streamurl/live/1.ts",   // stream source
            "-vcodec", "copy",
            "-reset_timestamps", "1",
            "-movflags", "frag_keyframe+empty_moov",
            "-f", "mp4",
            "-"
        ],
        { detached: false }
    );

    // Pipe the video output to the client response
    command.stdout.pipe(res);

    // Kill the subprocess when the client disconnects
    res.on("close", function() {
        command.kill();
    });
});

But I would like to define the output stream's file name so that it is accessible via a link such as:

http://myserverip:port/1.ts

And so on (each stream would have its own name). I see that the ffmpeg command has a parameter:

-f http://127.0.0.1:8081/1.ts

but I am not sure what Node.js code I need to write to make this link work. For example, if my server IP is 86.69.86.61, then entering http://86.69.86.61:8081/1.ts in VLC should start playing the video.

The Node.js ffmpeg examples I have seen only use stdout, rather than defining the ffmpeg output with the -f parameter.

UPDATED QUESTION:

I would like to make a restreamer that feeds these links into ffmpeg:

http://169.56.89.65:8001/1.ts
http://58.69.89.78:5026/2.ts
http://63.69.89.78:4012/3.ts

Suppose my server IP is 86.69.86.61. When I enter one of these links in VLC:

http://86.69.86.61:8081/1.ts
http://86.69.86.61:8081/2.ts
http://86.69.86.61:8081/3.ts

restreaming should start. I can start the restreaming with ffmpeg manually, but I can't get the above URLs working. How do I do that in Node.js? Do I need to route 1.ts, 2.ts and 3.ts to a port, or something else?
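For what it's worth, here is a hedged sketch of one shell-level approach (not tested against this exact setup): ffmpeg's HTTP output protocol can itself act as a single-client HTTP server when its listen option is set, so each source stream can get its own ffmpeg process on its own port. Serving several named paths (1.ts, 2.ts, 3.ts) on one shared port, as asked above, would still need a real HTTP server in front, for example a Node route that spawns ffmpeg per request. The restream helper name and the port numbers below are illustrative assumptions.

```shell
# Hypothetical helper: one ffmpeg per source; '-listen 1' makes the
# HTTP output URL act as a (single-client) HTTP server.
restream() {    # usage: restream <source-url> <local-port> <name>
    ffmpeg -i "$1" -c copy -f mpegts -listen 1 "http://0.0.0.0:$2/$3" &
}

# Example invocations (ports are assumptions; one port per stream):
# restream http://169.56.89.65:8001/1.ts 8081 1.ts
# restream http://58.69.89.78:5026/2.ts 8082 2.ts
# restream http://63.69.89.78:4012/3.ts 8083 3.ts
```

Note that ffmpeg's listen mode serves a single client per process, so for multiple simultaneous viewers per stream the Node-routing approach is the more robust design.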


ffmpeg stream chrome kiosk mode ubuntu 16.04 server


I have a weird out-of-sync issue while using ffmpeg to stream a Chrome browser to YouTube Live from an Ubuntu 16.04 server.

Issue: the output video streamed to YouTube has audio/video out of sync, sometimes by as much as 3 s.

Current flow:

1) start pulseaudio - we are using something like this to start it:

pulseaudio --start -vvv --disallow-exit --log-target=syslog --high-priority --exit-idle-time=-1 --daemonize

2) start Xvfb

Xvfb :0 -ac -screen 0 1920x1080x24

3) start Chrome in kiosk mode

google-chrome --kiosk --disable-gpu --incognito --no-first-run --disable-java --disable-plugins --disable-translate --disk-cache-size=$((1024 * 1024)) --disk-cache-dir=/tmp/chrome/ --user-data-dir=/tmp/chrome/ --force-device-scale-factor=1 --window-size=1920,1080 --window-position=0,0 LOCATION_URL

4) start ffmpeg

ffmpeg -y \
  -thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
  -thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
  -c:v libx264 -pix_fmt yuv420p -g 48 -crf 24 -filter:v fps=24 -preset ultrafast -tune zerolatency \
  -c:a aac -strict -2 -channel_layout stereo -ab 96k -ac 2 -flags +global_header \
  -f flv YOUTUBE_LIVE_STREAMING_RTMP

Note: this is running on an Amazon EC2 instance, meaning there is no sound card, so ALSA and PulseAudio create a dummy audio card. However, the latency does not come from there. Logs:

Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Adjust latency mode enabled, configuring sink latency to half of overall latency.
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Requested latency=23.22 ms, Received latency=23.22 ms
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Final latency 69.66 ms = 23.22 ms + 2*11.61 ms + 23.22 ms

At this point, here's what we observed:

  1. If we start ffmpeg immediately after issuing the command to start Chrome, we see DTS errors from ffmpeg. Audio is out of sync with the video, running 3-5 seconds AHEAD. We also noticed the offset stays the same for the full duration of the stream.

  2. If we start ffmpeg after around 10 seconds, audio and video are almost in sync. We then manually added -itsoffset -0.125 to the ffmpeg command and everything is perfect.

Questions:

  1. Why would ffmpeg have so much lag if it's started right after Chrome?
  2. Is starting ffmpeg 10 or X seconds later the expected behavior? That is, does the system need to wait for the audio/video signals to be "ready"?
  3. Is there a way to reliably know when Chrome is fully ready so we can start ffmpeg? We found it sometimes takes 5 s, sometimes 10, depending on the URL we load.
  4. Besides the DTS errors that ffmpeg throws, is there any other way to know if audio/video is out of sync? Sometimes we have a delay of 0.5 to 1 s, but ffmpeg does not report anything, and a restart is required to "re-balance" the audio/video inputs and get them back in sync.
  5. Could PulseAudio be the problem in this scenario?

Thank you
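Regarding question 3, one possible heuristic, offered as a sketch under the assumption that "Chrome is ready" roughly coincides with Chrome opening a PulseAudio playback stream, is to poll pactl for a sink input before launching ffmpeg. The wait_for_audio name and the 30 s budget are arbitrary choices.

```shell
# Poll PulseAudio until some client (here presumably Chrome) has
# registered a playback stream (a "sink input"); give up after
# 30 one-second attempts.
wait_for_audio() {
    for i in $(seq 1 30); do
        pactl list short sink-inputs 2>/dev/null | grep -q . && return 0
        sleep 1
    done
    return 1
}

# wait_for_audio && ffmpeg ...   # the capture command from step 4
```

This only detects that audio exists, not that the page has finished rendering, so it is a lower bound on readiness rather than a guarantee.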

UPDATE Dec 20

We were able to do some tricks to force Chrome to start the audio on page load, which forces it to connect to PulseAudio. Doing that, plus adding a 3 s delay before starting ffmpeg, there is no more delay when ffmpeg starts. However, our app is a WebRTC app, and we have an even stranger thing happening: if we start the page with no webcam/audio, then once the webcam/audio is enabled, ffmpeg (while showing no errors) has a delay of about 2 s. As we keep talking, within at most 30 s, that delay is GONE.

So the new questions are:

  1. Besides the DTS errors that ffmpeg throws, is there any other way to know if audio/video is out of sync? Sometimes we have a delay of 0.5 to 1 s, but ffmpeg does not report anything.
  2. What could cause the initial audio/video desync and the subsequent catching up?

avutil/random_seed: Use uint64 instead of uint8 for struct to avoid potential alignme...

avutil/random_seed: Use uint64 instead of uint8 for struct to avoid potential alignment issues
Signed-off-by: Michael Niedermayer
  • [DH] libavutil/random_seed.c

FFmpeg How to use alimiter Filter?


I cannot find enough documentation on the alimiter filter.

https://ffmpeg.org/ffmpeg-filters.html#alimiter

I used -filter_complex alimiter=limit=0.5 and it was applied to the file, but it boosted the volume.

I thought it was supposed to hard-limit the volume down?

FFmpeg reports on the command line that limit has the range [0.0625 - 1].

ffmpeg -i audio.wav -y -acodec libmp3lame -b:a 320k -ar 44100 -ac 2 -joint_stereo 1 -filter_complex alimiter=limit=0.5 audio.mp3

Here's a look at the two files in Adobe Audition (waveform screenshots: Original File, FFmpeg alimiter File).
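A plausible explanation for the boost, hedged since I have not verified it on this exact file: alimiter has a level option that is enabled by default and auto-levels the limited signal back up, which can raise the overall volume. Disabling it should make limit=0.5 act as a plain ceiling. The sketch below is the question's command with that one change, wrapped in an illustrative helper.

```shell
# Same command as above, with 'level=disabled' added to stop the
# limiter's automatic make-up leveling (assumption: that is what
# caused the boost; verify against your file).
limit_audio() {
    ffmpeg -i audio.wav -y -acodec libmp3lame -b:a 320k -ar 44100 -ac 2 \
           -joint_stereo 1 \
           -filter_complex "alimiter=limit=0.5:level=disabled" audio.mp3
}
```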


Revision 101247: [Salvatore] [source:_plugins_/verifier/lang/ verifier] Export from ...

avutil: Added selftest for libavutil/audio_fifo.c

avutil: Added selftest for libavutil/audio_fifo.c Signed-off-by: Thomas Turner 
Signed-off-by: Michael Niedermayer 
  • [DH] libavutil/Makefile
  • [DH] libavutil/tests/audio_fifo.c
  • [DH] tests/fate/libavutil.mak
  • [DH] tests/ref/fate/audio_fifo

Revision 101246: [Salvatore] [source:_plugins_/saisies/trunk/lang/ saisies] Export from ...

Makefile.lite: Fix running of tests

Makefile.lite: Fix running of tests
* `test/test_common.sh` is no longer an autogenerated file.
* Move `is_win` setting to `test_common.sh`.
  • [DH] configure.ac
  • [DH] test/common.sh
  • [DH] test/test_flac.sh
  • [DH] test/test_grabbag.sh

More Makefile.lite fixes

More Makefile.lite fixes Patch-from: Robert Kausch 
  • [DH] build/config.mk
  • [DH] src/share/Makefile.am
  • [DH] src/utils/flactimer/Makefile.am

libFLAC/cpu.c: Add CPP guard

libFLAC/cpu.c: Add CPP guard
  • [DH] src/libFLAC/cpu.c

Makefile.lite: Fix running of tests

Makefile.lite: Fix running of tests
* Generate `test/common.sh` from `test/common.sh.in`.
* Move `is_win` setting to `test_common.sh`.
  • [DH] test/Makefile.lite
  • [DH] test/common.sh.in
  • [DH] test/test_flac.sh
  • [DH] test/test_grabbag.sh

Need help in FFMPEG with AES encryption


I want to segment a video file, create a playlist, encrypt it with AES-128, stream it, and play it using FFMPEG on the Windows platform. I am able to segment the video file and create the playlist, but I need help with the AES encryption.

Below is the command I am using:

ffmpeg -i C:\Videos\Sample.mp4 -hls_time 30 -hls_key_info_file file.keyinfo -hls_segment_filename C:\Videos\Output\file%03d.ts C:\Videos\Output\out.m3u8 

In my Output directory, I have 15 segmented video files with .ts extension(file000.ts - file014.ts) and 1 playlist file (out.m3u8).

Now out.m3u8 file is as below:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:38
#EXT-X-MEDIA-SEQUENCE:10
#EXTINF:28.653622,
file010.ts
#EXTINF:29.863167,
file011.ts
#EXTINF:37.412378,
file012.ts
#EXTINF:22.814456,
file013.ts
#EXTINF:16.057711,
file014.ts
#EXT-X-ENDLIST 

Why is the out.m3u8 file showing only file010.ts - file014.ts? Why is it not encrypting all the files? What about the remaining files? What am I missing?
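A likely explanation, hedged but consistent with the output above: the HLS muxer's hls_list_size option defaults to 5, so out.m3u8 only keeps a rolling window of the last five segments; the earlier segments were still written (and encrypted) on disk, they just rolled out of the playlist. Setting hls_list_size to 0 keeps every segment listed. The sketch below is the question's command with that one flag added, wrapped in an illustrative helper.

```shell
# The question's command with '-hls_list_size 0' added so the playlist
# lists all segments instead of the default rolling window of 5.
make_playlist() {
    ffmpeg -i "C:\Videos\Sample.mp4" -hls_time 30 -hls_list_size 0 \
           -hls_key_info_file file.keyinfo \
           -hls_segment_filename "C:\Videos\Output\file%03d.ts" \
           "C:\Videos\Output\out.m3u8"
}
```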

Bug #3879 (In progress): An author's url field must be able to point to an internal URL of the site


Hello,

Revision [23060] had the consequence that URLs of the form art12 or rub12, which are perfectly valid for SPIP (and usable, for example, in redirections), no longer work for authors.

Yet it is entirely consistent to have a site where an author's profile indicates, via the #URL_SITE_AUTEUR field, the section of the site they are responsible for.

For the user who reported this after upgrading to 3.0.24 [23212], it breaks the ability to create new ones and to edit the old ones.

PS: their message on IRC:

Hello, I just realized that on my SPIP 3.0.24 [23212], on the author page, the "Website address (URL)" field no longer accepts the value "rubriqueXXX" or "rubXXX"; I get the message "the site URL is not valid." Is this normal? A recent change?

ffmpeg DirectShow capture 2 pins


Here are the ffmpeg DirectShow options:

DirectShow video device options
 Pin "Capture"
  pixel_format=yuyv422 min s=720x480 fps=59.9402 max s=720x480 fps=59.9402
  pixel_format=yuyv422 min s=720x480 fps=29.97 max s=720x480 fps=29.97
  pixel_format=yuyv422 min s=720x576 fps=50 max s=720x576 fps=50
  pixel_format=yuyv422 min s=720x576 fps=25 max s=720x576 fps=25
  pixel_format=yuyv422 min s=640x480 fps=59.9402 max s=640x480 fps=59.9402
  pixel_format=yuyv422 min s=1920x1080 fps=29.97 max s=1920x1080 fps=29.97
  pixel_format=yuyv422 min s=1920x1080 fps=25 max s=1920x1080 fps=25
  pixel_format=yuyv422 min s=1920x1080 fps=24 max s=1920x1080 fps=24
  pixel_format=yuyv422 min s=1280x720 fps=59.9402 max s=1280x720 fps=59.9402
  pixel_format=yuyv422 min s=1280x720 fps=50 max s=1280x720 fps=50
 Pin "Audio"

What ffmpeg command will capture both Pins?

Update

My device name is 7160 HD Capture:

ffmpeg -f dshow -i video="7160 HD Capture" out.mp4

The following command works fine:

ffmpeg -f dshow -s 1280x720 -i video="7160 HD Capture" -rtbufsize 2000M out19.mp4

I tried

ffmpeg -f dshow -s 1280x720 -i "video=7160 HD Capture:audio=7160 HD Capture" -rtbufsize 2000M out20.mp4

it does not work and returns an error:

[dshow @ 000000000250b540] Could not enumerate audio devices. video=7160 HD Capture:audio=7160 HD Capture: Input/output error

I have seen that the audio pin has different names on different cards. Maybe I should name it explicitly.

Update 2

GraphEdit (screenshot)

I do not have any audio capture devices, but the video capture device definitely has audio.

I am able to play that audio pin on the default audio device.
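One way to pin down the exact device and pin names before guessing at the audio specifier is to enumerate them with dshow's -list_devices and -list_options flags; the device name below is the one from the question, and the list_dshow helper name is an illustrative choice.

```shell
# Enumerate DirectShow devices, then dump the options the named video
# device exposes; the audio device/pin names appear in this output.
list_dshow() {
    ffmpeg -list_devices true -f dshow -i dummy
    ffmpeg -list_options true -f dshow -i video="7160 HD Capture"
}
```

If the audio turns out to be a second pin on the video device rather than a separate audio device, the dshow demuxer also has an audio_pin_name option that may select it by name, though whether it applies to this card is an assumption.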

Decoding AAC audio stream coming from red5 server with javacv-ffmpeg on Android


I am trying to play an AAC audio live stream coming from a Red5 server, so to decode the audio data I am using javacv-ffmpeg. Data is received as packets of byte[]. Here is what I tried:

public Frame decodeAudio(byte[] adata, long timestamp) {
    BytePointer audio_data = new BytePointer(adata);

    avcodec.AVCodec codec1 = avcodec.avcodec_find_decoder(avcodec.AV_CODEC_ID_AAC); // for AAC
    if (codec1 == null) {
        Log.d("showit", "avcodec_find_decoder() error: Unsupported audio format or codec not found: " + audio_c.codec_id() + ".");
    }

    audio_c = null;
    audio_c = avcodec.avcodec_alloc_context3(codec1);
    audio_c.sample_rate(44100);
    audio_c.sample_fmt(3);
    audio_c.bits_per_raw_sample(16);
    audio_c.channels(1);

    if ((ret = avcodec.avcodec_open2(audio_c, codec1, (PointerPointer) null)) < 0) {
        Log.d("showit", "avcodec_open2() error " + ret + ": Could not open audio codec.");
    }

    if ((samples_frame = avcodec.avcodec_alloc_frame()) == null)
        Log.d("showit", "avcodec_alloc_frame() error: Could not allocate audio frame.");

    avcodec.av_init_packet(pkt2);
    samples_frame = avcodec.avcodec_alloc_frame();
    avcodec.av_init_packet(pkt2);
    pkt2.data(audio_data);
    pkt2.size(audio_data.capacity());
    pkt2.pts(timestamp);
    pkt2.pos(0);

    int len = avcodec.avcodec_decode_audio4(audio_c, samples_frame, got_frame, pkt2);
}

But len after decoding returns -1 for the first frame and then always -22.
The first packet always looks like this:

AF 00 12 08 56 E5 00

Subsequent packets look like:

AF 01 01 1E 34 2C F0 A4 71 19 06 00 00 95 12 AE AA 82 5A 38 F3 E6 C2 46 DD CB 2B 09 D1 00 78 1D B7 99 F8 AB 41 6A C4 F4 D2 40 51 17 F5 28 51 9E 4C F6 8F 15 39 49 42 54 78 63 D5 29 74 1B 4C 34 9B 85 20 8E 2C 0E 0C 19 D2 E9 40 6E 9C 85 70 C2 74 44 E4 84 9B 3C B9 8A 83 EC 66 9D 40 1B 42 88 E2 F3 65 CF 6D B3 20 88 31 29 94 29 A4 B4 DE 26 B0 75 93 3A 0C 57 12 8A E3 F4 B9 F9 23 9C 69 C9 D4 BF 9E 26 63 F2 78 D6 FD 36 B9 32 62 01 91 19 71 30 2D 54 24 62 A1 20 1E BA 3D 21 AC F3 80 33 3A 1A 6C 30 3C 44 29 F2 A7 DC 9A FF 0F 99 F2 38 85 AB 41 FD C7 C5 40 5C 3F EE 38 70

I couldn't figure out where the problem is: in setting up the AVCodecContext audio_c, or in setting up the packet for the decoder.

Any help appreciated. Thanks in advance.


Android: How to configure FFMPEG latest version in android studio?


I want to configure FFMPEG in Android Studio, but I can't find any documentation or link for that. There are many FFMPEG libraries for Android on GitHub, but they are all old versions. Also, how do I run commands on Android? Once FFMPEG is configured, how can we run FFMPEG commands? Please help. Thanks in advance.

I have used the links below, but had no success with the latest version.

http://writingminds.github.io/ffmpeg-android-java

https://github.com/WritingMinds/ffmpeg-android-java

https://github.com/havlenapetr/FFMpeg

https://github.com/appunite/AndroidFFmpeg

https://github.com/WritingMinds/ffmpeg-android

how to convert ima_adpcm to pcm that can be played


I am using ffmpeg to decode a wav file with ima_adpcm. I use both avresample_convert() and swr_convert() to convert an 8000 Hz, 1-channel s16p audio sample to an 8000 Hz, 2-channel s16 audio sample. After converting, something is wrong with it: it can't be played normally.

I had previously converted a wav file with PCM data (AV_CODEC_ID_PCM_U8), and it succeeded. I would like to know if there is some special characteristic of ima_adpcm that I forgot to handle.

Here is some code for dealing with it

m_pAudioResCodecCtx = avresample_alloc_context();
av_opt_set_int(m_pAudioResCodecCtx, "in_channel_layout", m_pAudioCodecCtx->channel_layout, 0);
av_opt_set_int(m_pAudioResCodecCtx, "in_sample_fmt", m_pAudioCodecCtx->sample_fmt, 0);
av_opt_set_int(m_pAudioResCodecCtx, "in_sample_rate", m_pAudioCodecCtx->sample_rate, 0);
av_opt_set_int(m_pAudioResCodecCtx, "out_channel_layout", channel_layout, 0);
av_opt_set_int(m_pAudioResCodecCtx, "out_sample_fmt", AV_SAMPLE_FMT_S16, 0);
av_opt_set_int(m_pAudioResCodecCtx, "out_sample_rate", m_pAudioCodecCtx->sample_rate, 0);
av_opt_set_int(m_pAudioResCodecCtx, "internal_sample_fmt", AV_SAMPLE_FMT_FLTP, 0);
int ffmpeg_error = avresample_open(m_pAudioResCodecCtx);

...............

if (m_pAudioResCodecCtx != NULL)
{
    LPBYTE out_bufs[AV_NUM_DATA_POINTERS] = { 0 };
    int out_linesize = 0;
    int ffmpeg_err = av_samples_fill_arrays(out_bufs, &out_linesize, dst_buf,
        iChannelsCount * m_iChannelScale, avframe.nb_samples, AV_SAMPLE_FMT_S16, 1);
    int samples = avresample_convert(m_pAudioResCodecCtx, out_bufs, out_linesize,
        avframe.nb_samples, (uint8_t**)avframe.data, avframe.linesize[0], avframe.nb_samples);
    int data_length = samples * iChannelsCount * m_iChannelScale * av_get_bytes_per_sample(AV_SAMPLE_FMT_S16);

and avframe is the decoded packet.

How to make ffmpeg exit when Input is broken


I have written a bash script to keep an ffmpeg command up and running:

#!/bin/bash
while :
do
    echo `ffmpeg -re -i http://domain.com/index400.m3u8 -vcodec copy -acodec copy -f mpegts udp://127.0.0.1:10000?pkt_size=1316`
done

The problem is, sometimes the input is broken, yet ffmpeg does not exit when that happens, so it is never restarted by the script. Instead, the same process keeps running even though it is not transferring any packets to the UDP output, and I have to go into the terminal and kill it manually (kill -9 #processID).

I need a way to make ffmpeg kill its own process whenever the input is broken.

Appreciate your help.
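One hedged sketch of a fix, assuming a reasonably recent ffmpeg build: the -rw_timeout protocol option (in microseconds) makes ffmpeg abort I/O on a stalled input instead of hanging forever, so the surrounding loop gets a chance to restart it. The 10 s timeout and the run_stream name are illustrative choices.

```shell
# ffmpeg exits once the HTTP input stalls for ~10 s (-rw_timeout is in
# microseconds), letting a supervisor loop restart it.
run_stream() {
    ffmpeg -rw_timeout 10000000 -re -i http://domain.com/index400.m3u8 \
           -vcodec copy -acodec copy \
           -f mpegts "udp://127.0.0.1:10000?pkt_size=1316"
}

# Supervisor loop (same shape as the script above):
# while :; do run_stream; sleep 1; done
```

The sleep between restarts avoids a tight spin if the source stays down for a long time.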

YouTube Live ffmpeg DVR


I'm using ffmpeg to stream to YouTube live. I have everything working perfectly.

When I enable DVR on my channel I immediately start receiving playback errors on all iOS devices. When I disable DVR, the errors immediately go away.

I've tried changing the GOP and my keyframes. YouTube Live's control panel does not indicate any errors.

Anyone else seen this?

Can you put the result of a blackdetect filter in a textfile using ffmpeg?


I'm testing out the blackdetect filter in ffmpeg. I want the times when the video is black to be readable by a script (e.g. ActionScript or JavaScript). I tried:

ffmpeg -i video1.mp4 -vf "blackdetect=d=2:pix_th=0.00" -an -f null -

And I get a nice result in the ffmpeg log:

ffmpeg version N-55644-g68b63a3 Copyright (c) 2000-2013 the FFmpeg developers
  built on Aug 19 2013 20:32:00 with gcc 4.7.3 (GCC)
  configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
  libavutil      52. 42.100 / 52. 42.100
  libavcodec     55. 28.100 / 55. 28.100
  libavformat    55. 13.103 / 55. 13.103
  libavdevice    55.  3.100 / 55.  3.100
  libavfilter     3. 82.100 /  3. 82.100
  libswscale      2.  5.100 /  2.  5.100
  libswresample   0. 17.103 /  0. 17.103
  libpostproc    52.  3.100 / 52.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video1.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    creation_time   : 1970-01-01 00:00:00
    encoder         : Lavf53.13.0
  Duration: 00:02:01.54, start: 0.000000, bitrate: 275 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 768x432 [SAR 1:1 DAR 16:9], 211 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
    Metadata:
      creation_time   : 1970-01-01 00:00:00
      handler_name    : VideoHandler
    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 59 kb/s
    Metadata:
      creation_time   : 1970-01-01 00:00:00
      handler_name    : SoundHandler
Output #0, null, to 'pipe:':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf55.13.103
    Stream #0:0(eng): Video: rawvideo (I420 / 0x30323449), yuv420p, 768x432 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 90k tbn, 25 tbc
    Metadata:
      creation_time   : 1970-01-01 00:00:00
      handler_name    : VideoHandler
Stream mapping:
  Stream #0:0 -> #0:0 (h264 -> rawvideo)
Press [q] to stop, [?] for help
[null @ 00000000003279a0] Encoder did not produce proper pts, making some up.
[blackdetect @ 0000000004d5e800] black_start:0 black_end:17.08 black_duration:17.08
[blackdetect @ 0000000004d5e800] black_start:62.32 black_end:121.48 black_duration:59.16
frame= 3038 fps=2317 q=0.0 Lsize=N/A time=00:02:01.52 bitrate=N/A
video:285kB audio:0kB subtitle:0 global headers:0kB muxing overhead -100.007543%

And I'm particularly interested in this part:

[blackdetect @ 0000000004e2e340] black_start:0 black_end:17.08 black_duration:17.08
[blackdetect @ 0000000004e2e340] black_start:62.32 black_end:121.48 black_duration:59.16

So my questions:

  1. Is there a way to capture only the blackdetect filter output and put it in a .txt file?
  2. And if this is possible, is there a way to do this in a single statement with multiple video inputs, as in this example:

ffmpeg -f concat -i mylist.txt -c copy concat.mp4

Where mylist.txt is a list of videos:

file 'video1.mp4'
file 'video2.mp4'
file 'video3.mp4'
file 'video4.mp4'

Basically, what I want is one or more text files containing information about the black frames in every video in this list, to be used by another program.
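Since blackdetect writes its reports to stderr, one straightforward sketch is to redirect stderr and filter the [blackdetect ...] lines into a text file per input; the detect_black helper name is an illustrative choice.

```shell
# Write only the blackdetect report lines for one video into
# <name>_black.txt; blackdetect output arrives on stderr.
detect_black() {
    ffmpeg -i "$1" -vf "blackdetect=d=2:pix_th=0.00" -an -f null - 2>&1 \
        | grep '\[blackdetect' > "${1%.*}_black.txt"
}

# One text file per video in the list:
# for f in video1.mp4 video2.mp4 video3.mp4 video4.mp4; do detect_black "$f"; done
```

Note this runs the videos one by one rather than through the concat demuxer; with concat, the reported timestamps would be relative to the joined file, which is probably not what per-video text files should contain.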


