I've searched the internet for how to view subtitles while playing a movie with ffplay, but no luck, or maybe I'm too dumb. The file extension is .mts.
Thanks
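For what it's worth, ffplay does render embedded subtitle streams, and pressing t during playback cycles through the subtitle channels. A quick check, assuming the .mts actually carries a subtitle stream (the post doesn't confirm it does):
ffprobe input.mts    # list the streams and look for a "Subtitle:" entry
ffplay input.mts     # press 't' while playing to cycle through the subtitle channels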
I can read the frames:
obj = cv2.VideoCapture(path)
_, frame = obj.read()
But I do not know how to collect a bunch of frames in a data structure and turn them into a video. I read about cv2.VideoWriter but could not get it to work for this purpose.
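A minimal sketch of the usual cv2.VideoWriter pattern; the file names, codec choice, and the plain-list buffering are illustrative assumptions, not from the post:
import cv2

cap = cv2.VideoCapture("input.avi")                 # hypothetical input path
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0             # fall back if FPS metadata is missing
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

frames = []                                         # the "data structure": a plain list of numpy arrays
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

fourcc = cv2.VideoWriter_fourcc(*"mp4v")            # codec has to suit the output container
out = cv2.VideoWriter("output.mp4", fourcc, fps, (width, height))
for frame in frames:                                # every frame must match the (width, height) above
    out.write(frame)
out.release()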
I'm using ffmpeg to segment video with the following command:
ffmpeg -i "data/raw_data/000000005.avi" -vf fps=X -f segment -segment_time 0.0333333333333333 -force_key_frames expr:gte(t,n_forced*0.0333333333333333) -reset_timestamps 1 -segment_time_delta 1.0 -c:a copy "test_break_up/audios/%d.wav"
The command above works on Windows, but when I run it on Ubuntu it throws: bash: syntax error near unexpected token `('
Can anyone give me guidance on what to do? Thanks
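bash interprets the unquoted parentheses in the -force_key_frames expression as shell syntax (cmd.exe on Windows does not), so quoting that argument should be enough:
ffmpeg -i "data/raw_data/000000005.avi" -vf fps=X -f segment -segment_time 0.0333333333333333 -force_key_frames "expr:gte(t,n_forced*0.0333333333333333)" -reset_timestamps 1 -segment_time_delta 1.0 -c:a copy "test_break_up/audios/%d.wav"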
I want to change the surface preview's bottom overlay with a GIF or image, like Vigo.
Like this:
Please tell me which SDK or filter I should use for this.
I am able to change the overlay on the top view using this:
PictureCallback cameraPictureCallbackJpeg = new PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        // Decode the captured JPEG into a bitmap
        Bitmap cameraBitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
        int wid = cameraBitmap.getWidth();
        int hgt = cameraBitmap.getHeight();
        // Toast.makeText(getApplicationContext(), wid + "" + hgt, Toast.LENGTH_SHORT).show();

        // Draw the camera frame first, then the overlay drawable on top
        Bitmap newImage = Bitmap.createBitmap(wid, hgt, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(newImage);
        canvas.drawBitmap(cameraBitmap, 0f, 0f, null);
        Drawable drawable = getResources().getDrawable(R.drawable.mark3);
        drawable.setBounds(20, 30, drawable.getIntrinsicWidth() + 20, drawable.getIntrinsicHeight() + 30);
        drawable.draw(canvas);

        // Save the composited image to external storage
        File storagePath = new File(Environment.getExternalStorageDirectory() + "/PhotoAR/");
        storagePath.mkdirs();
        File myImage = new File(storagePath, Long.toString(System.currentTimeMillis()) + ".jpg");
        try {
            FileOutputStream out = new FileOutputStream(myImage);
            newImage.compress(Bitmap.CompressFormat.JPEG, 80, out);
            out.flush();
            out.close();
        } catch (FileNotFoundException e) {
            Log.d("In Saving File", e + "");
        } catch (IOException e) {
            Log.d("In Saving File", e + "");
        }

        camera.startPreview();
        newImage.recycle();
        newImage = null;

        // Open the saved image in a viewer
        Intent intent = new Intent();
        intent.setAction(Intent.ACTION_VIEW);
        intent.setDataAndType(Uri.parse("file://" + myImage.getAbsolutePath()), "image/*");
        startActivity(intent);
    }
};
This is the output of it.
I made a PowerShell function to re-encode video with some extra parameters. It basically runs Get-ChildItem on the directory and feeds every file it finds to a foreach loop. This worked well as long as I had default values inside my function that get fed into the ffmpeg string in the loop whenever I don't provide anything on the command line (number of passes, audio quality, etc.). Now I want to integrate the option of using ffmpeg's -vf filter option. My problem is that I usually don't need it, so there is no sane default I could fall back on, which means I can't simply have something like -vf $filteroption in my command line. So I am trying to figure out how to get that "-vf" inside the variable without PowerShell or ffmpeg screwing me over, because at the moment I either get an error about a missing - in what ffmpeg sees (I guess PowerShell parses it away), or, when I escape the - with a backtick, I see it in the ffmpeg line but ffmpeg does not recognize it as a single parameter.
An example that works:
&$encoder -hide_banner -i $i -c:v libvpx-vp9 -b:v 0 -crf $quality -tile-columns 6 -tile-rows 2 -threads 8 -speed 2 -frame-parallel 0 -row-mt 1 -c:a libopus -b:a $bitrate -af aformat=channel_layouts=$audio -c:s copy -auto-alt-ref 1 -lag-in-frames 25 -y $outfile;
Here I provide $quality, $audio, etc. as PowerShell parameters to the function, like -quality 31 -audio stereo, and it all works.
But now I need to get something like "-vf scale=1920:-1" or "" inside that line, and that does not work with just this:
&$encoder -hide_banner -i $i -c:v libvpx-vp9 -b:v 0 -crf $quality -tile-columns 6 -tile-rows 2 -threads 8 -speed 2 -frame-parallel 0 -row-mt 1 -c:a libopus -b:a $bitrate -af aformat=channel_layouts=$audio -c:s copy -auto-alt-ref 1 -lag-in-frames 25 -y $extra $outfile;
When I call the function with "RecodeVP9 -extra -vf scale=1920:-1", PowerShell takes away the -; if I escape the - with a backtick, ffmpeg whines about it, saying "Unable to find a suitable output format for '-vf'". I also tried other quoting and escaping combinations with similar results. So it seems that either PowerShell or ffmpeg screws me over.
To sum it up: I need a way to pass extra ffmpeg arguments, including the parameter name itself (like -vf scale=1920:-1), from the PowerShell command line into my PowerShell function.
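One standard PowerShell technique that fits here (my suggestion, not from the post) is to accept the extra tokens as a string array and splat it with @, so ffmpeg receives -vf and its value as two separate arguments, or nothing at all when the array is empty. A minimal sketch with hypothetical parameter names:
function RecodeVP9 {
    param(
        [string]$quality = "31",
        [string[]]$extra = @()   # empty by default, so nothing extra lands on the command line
    )
    foreach ($i in Get-ChildItem *.mkv) {
        # @extra expands to zero or more separate arguments, so ffmpeg sees
        # '-vf' and 'scale=1920:-1' as two tokens instead of one mangled string
        & ffmpeg -hide_banner -i $i.FullName -c:v libvpx-vp9 -crf $quality @extra -y "$($i.BaseName).webm"
    }
}

# quote the tokens so PowerShell does not parse the leading dash:
RecodeVP9 -quality 31 -extra '-vf','scale=1920:-1'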
I'm using remuxing.c and trying to add the segment option to the code like this:
AVDictionary* headerOptions = NULL;
av_dict_set(&headerOptions, "segment_time", "10", 0);
avformat_write_header(ofmt_ctx, &headerOptions);  /* takes the context itself, not its address */
It's not working.
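For reference, segment_time is a private option of the segment muxer, so it only takes effect if the output context was created with that muxer, e.g. avformat_alloc_output_context2(&ofmt_ctx, NULL, "segment", out_filename); a muxer that doesn't recognize the option simply leaves it unconsumed in the dictionary. The CLI equivalent of what this code attempts would be roughly (file names are placeholders):
ffmpeg -i input.ts -map 0 -c copy -f segment -segment_time 10 -reset_timestamps 1 out%03d.ts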
I can't use DXVA2 hardware acceleration for decoding HEVC video with ffmpeg. DXVA2 for H.264 works fine.
I compiled the official example hw_decode.c from the ffmpeg sources:
https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/hw_decode.c
When I call avcodec_send_packet(), it invokes the callback assigned to AVCodecContext->get_format, and the offered format list contains only AV_PIX_FMT_YUV420P for HEVC video, instead of the AV_PIX_FMT_DXVA2_VLD it offers for all H.264 videos. So HW decoding doesn't work.
Software decoding of HEVC works without problems.
MPC-HC plays HEVC video fine with DXVA2 (CPU load is low and Task Manager shows the video decoder working in the GPU details). My video card is a GeForce 1060.
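As a quick sanity check (my suggestion, not from the post), the ffmpeg CLI can exercise the same DXVA2 code path; if this command also falls back to software for HEVC, the build itself lacks HEVC DXVA2 support:
ffmpeg -hwaccel dxva2 -i input_hevc.mp4 -f null -
(input_hevc.mp4 is a placeholder; ffmpeg -hwaccels lists the hwaccels actually compiled in.)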
I'm using moviepy to import some videos, but videos that should be in portrait mode are imported in landscape. I need to check whether the rotation has been changed and, if it has, rotate the clip back.
Is this functionality built into moviepy? If not, how else can I check it?
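One way to check (a sketch of my own, not moviepy's documented API) is to read the video stream's rotate tag with ffprobe and undo the rotation; it assumes ffprobe is on the PATH, and how this interacts with moviepy's own rotation handling may depend on the moviepy version:
import json
import subprocess
from moviepy.editor import VideoFileClip

def get_rotation(path):
    # Ask ffprobe for the first video stream's metadata and read its 'rotate' tag
    out = subprocess.check_output([
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_streams", "-select_streams", "v:0", path,
    ])
    tags = json.loads(out)["streams"][0].get("tags", {})
    return int(tags.get("rotate", 0))

path = "clip.mp4"                     # hypothetical file
clip = VideoFileClip(path)
rotation = get_rotation(path)
if rotation:
    clip = clip.rotate(-rotation)     # rotate back to the intended orientation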
I am using ffmpeg to convert AMR to WAV and WAV to AMR. It successfully converts AMR to WAV but not vice versa: even though ffmpeg supports an AMR encoder and decoder, it gives an error.
ffmpeg -i testwav.wav audio.amr
Error while opening encoder for output stream #0.0 - maybe incorrect parameters such as bit_rate, rate, width or height
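ffmpeg's native AMR-NB support is decode-only; encoding goes through libopencore-amrnb, which additionally requires 8 kHz mono input and one of the fixed AMR bitrates. A likely working command, assuming ffmpeg was built with --enable-libopencore-amrnb:
ffmpeg -i testwav.wav -ar 8000 -ac 1 -c:a libopencore_amrnb -b:a 12.2k audio.amr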
I'm trying to write a Python program that uses MoviePy on Mac OS 10.11.6 to convert an MP4 file to GIF. I use:
import moviepy.editor as mp
and I get an error saying I need to call imageio.plugins.ffmpeg.download() so I can download ffmpeg. I use:
import imageio
imageio.plugins.ffmpeg.download()
which gives me the following error:
Imageio: 'ffmpeg.osx' was not found on your computer; downloading it now.
Error while fetching file: .
Error while fetching file: .
Error while fetching file: .
Error while fetching file: .
Traceback (most recent call last):
  File "", line 1, in
    imageio.plugins.ffmpeg.download()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/imageio/plugins/ffmpeg.py", line 55, in download
    get_remote_file('ffmpeg/' + FNAME_PER_PLATFORM[plat])
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/imageio/core/fetching.py", line 121, in get_remote_file
    _fetch_file(url, filename)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/imageio/core/fetching.py", line 177, in _fetch_file
    os.path.basename(file_name))
OSError: Unable to download 'ffmpeg.osx'. Perhaps there is a no internet connection? If there is, please report this problem.
I definitely have an internet connection. I found this link and tried installing with Homebrew and with static builds, but neither has worked. It seems like compiling it myself would be a little too advanced for me (I've only briefly looked into it). I ran imageio.plugins.ffmpeg.download() in IDLE. I read something about using PyCharm to run the MoviePy code, but I get the same initial error. ffmpeg is currently in my /usr/local/bin folder. Any suggestions are welcome. Thanks for your help.
Edit: I'm using Python 3.6.1
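Since ffmpeg already sits in /usr/local/bin, one workaround (a sketch; imageio's ffmpeg plugin honors the IMAGEIO_FFMPEG_EXE environment variable, though exact behavior depends on the imageio version) is to point imageio at the existing binary so the download step is skipped entirely:
import os
os.environ["IMAGEIO_FFMPEG_EXE"] = "/usr/local/bin/ffmpeg"  # must be set before moviepy/imageio resolve ffmpeg
import moviepy.editor as mp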
I am developing an Android application in which I need to play an AAC live audio stream coming from a Red5 server.
I have successfully decoded the audio stream using javacv-ffmpeg, but my problem is how to play the audio from the decoded samples.
I have tried the following:
int len = avcodec.avcodec_decode_audio4(audio_c, samples_frame, got_frame, pkt2);
if (len <= 0) {
    this.pkt2.size(0);
} else {
    if (this.got_frame[0] != 0) {
        long pts = avutil.av_frame_get_best_effort_timestamp(samples_frame);
        int sample_format = samples_frame.format();
        int planes = avutil.av_sample_fmt_is_planar(sample_format) != 0 ? samples_frame.channels() : 1;
        int data_size = avutil.av_samples_get_buffer_size((IntPointer) null, audio_c.channels(),
                samples_frame.nb_samples(), audio_c.sample_fmt(), 1) / planes;
        if ((samples_buf == null) || (samples_buf.length != planes)) {
            samples_ptr = new BytePointer[planes];
            samples_buf = new Buffer[planes];
        }
        // Copy the first plane of the decoded frame into a byte[] for playback
        BytePointer ptemp = samples_frame.data(0);
        BytePointer[] temp_ptr = new BytePointer[1];
        temp_ptr[0] = ptemp.capacity(sample_size);
        ByteBuffer btemp = ptemp.asBuffer();
        byte[] buftemp = new byte[sample_size];
        btemp.get(buftemp, 0, buftemp.length);
        // ... play buftemp[] with AudioTrack ...
    }
}
But only noise is heard from the speakers. Is there any processing that needs to be done on the AVFrame we get from avcodec_decode_audio4()? The incoming audio stream is correctly encoded with AAC.
Any help or suggestions appreciated. Thanks in advance.
I have installed ffmpeg via conda using conda install -c conda-forge ffmpeg and am getting RuntimeError: Requested MovieWriter (ffmpeg) not available. Any ideas? A direct import ffmpeg yields "not found" as well, and conda list shows ffmpeg 4.0 installed. I'm using Ubuntu 16.04 and a current conda, inside an env.
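That error comes from matplotlib's animation machinery, which looks for an ffmpeg executable rather than a Python module (so import ffmpeg failing is unrelated). A quick check, where the binary path below is a placeholder for the env's actual one:
import matplotlib
import matplotlib.animation as animation

print(animation.FFMpegWriter.isAvailable())   # False means matplotlib cannot find the ffmpeg binary
matplotlib.rcParams["animation.ffmpeg_path"] = "/path/to/conda/env/bin/ffmpeg"  # hypothetical path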
I tried to build OpenCV 3.1 on Ubuntu 16.04 with ffmpeg. I followed the instructions on how to install ffmpeg here.
According to this discussion, I checked:
>> mediainfo ~/Desktop/grb_2.mp4
General
Complete name : /home/nyan/Desktop/grb_2.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom (isom/iso2/avc1/mp41)
File size : 445 KiB
Duration : 27s 862ms
Overall bit rate : 131 Kbps
Encoded date : UTC 1904-01-01 00:00:00
Tagged date : UTC 1904-01-01 00:00:00
Writing application : Lavf57.66.105
Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L3
Format settings, CABAC : Yes
Format settings, ReFrames : 4 frames
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 27s 862ms
Bit rate : 128 Kbps
Width : 720 pixels
Height : 480 pixels
Display aspect ratio : 4:3
Original display aspect ratio : 4:3
Frame rate mode : Constant
Frame rate : 29.970 (30000/1001) fps
Standard : NTSC
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.012
Stream size : 435 KiB (98%)
Writing library : x264 core 148
Encoding settings : cabac=1 / ref=3 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=7 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=12 / lookahead_threads=2 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=3 / b_pyramid=2 / b_adapt=1 / b_bias=0 / direct=1 / weightb=1 / open_gop=0 / weightp=2 / keyint=250 / keyint_min=25 / scenecut=40 / intra_refresh=0 / rc_lookahead=40 / rc=crf / mbtree=1 / crf=23.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00
Encoded date : UTC 1904-01-01 00:00:00
Tagged date : UTC 1904-01-01 00:00:00
ffmpeg -codecs | grep -i avc
ffmpeg version N-90982-gb995ec0 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.9) 20160609
  configuration: --prefix=/home/nyan/ffmpeg_build --enable-shared --extra-cflags=-I/home/nyan/ffmpeg_build/include --extra-ldflags=-L/home/nyan/ffmpeg_build/lib --extra-libs='-lpthread -lm' --bindir=/home/nyan/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-nonfree
  libavutil      56. 18.100 / 56. 18.100
  libavcodec     58. 19.100 / 58. 19.100
  libavformat    58. 13.101 / 58. 13.101
  libavdevice    58.  4.100 / 58.  4.100
  libavfilter     7. 21.100 /  7. 21.100
  libswscale      5.  2.100 /  5.  2.100
  libswresample   3.  2.100 /  3.  2.100
  libpostproc    55.  2.100 / 55.  2.100
 DEV.LS h264  H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_v4l2m2m ) (encoders: libx264 libx264rgb h264_v4l2m2m h264_vaapi )
 D.A.L. avc   On2 Audio for Video Codec (decoders: on2avc )
So my ffmpeg installation is OK.
When I configure OpenCV with CMake, I set it up to use ffmpeg. But when I compile with make, I get this error:
make[2]: *** No rule to make target '/home/nyan/ffmpeg_build/lib/libavresample.a', needed by 'lib/libopencv_videoio.so.3.1.0'. Stop.
CMakeFiles/Makefile2:8709: recipe for target 'modules/videoio/CMakeFiles/opencv_videoio.dir/all' failed
make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2
Makefile:160: recipe for target 'all' failed
make: *** [all] Error 2
How can I fix the problem?
I installed libavresample manually, and:
sudo apt-get install libavresample-dev
[sudo] password for nyan:
Reading package lists... Done
Building dependency tree
Reading state information... Done
libavresample-dev is already the newest version (7:2.8.14-0ubuntu0.16.04.1).
It is the newest version.
EDIT:
Actually libavresample is already deprecated in ffmpeg, so I tried installing the newer OpenCV 3.4.1 instead. Compiling, building and installation were fine, but VideoCapture cap(0); does not read from the device. So I checked the ffmpeg support of the OpenCV 3.4.1 installation with:
python -c "import cv2; print(cv2.getBuildInformation())" | grep -i ffmpeg
and FFMPEG is NO. So the build with ffmpeg was not successful, even though there were no errors while building OpenCV 3.4.1.
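On Linux, OpenCV picks up ffmpeg through pkg-config, so a common fix (my suggestion, reusing the build paths from the post, not a verified recipe) is to expose the custom ffmpeg build before configuring, then re-run cmake until its summary reports FFMPEG: YES:
export PKG_CONFIG_PATH=/home/nyan/ffmpeg_build/lib/pkgconfig:$PKG_CONFIG_PATH
export LD_LIBRARY_PATH=/home/nyan/ffmpeg_build/lib:$LD_LIBRARY_PATH
cmake -D WITH_FFMPEG=ON ..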
avcodec/nvdec_hevc: fix scaling lists
The main issue here was the use of [i] instead of [i * 3] for the 32x32 matrix. As part of fixing this, I changed the code to match that used in vdpau_hevc, which I spent a lot of time verifying. I also changed to calculating NumPocTotalCurr using the existing helper, which is what vdpau does.
Signed-off-by: Timo Rothenpieler
I want to add a 5.1 .flac audio track to a .ts file that already has three audio tracks. I tried tsMuxeR and ffmpeg with no success: tsMuxeR does not recognize the .flac track, and in ffmpeg everything seems to work until the very last moment, when I check the file and the .flac audio track is not included in the output.ts. The .flac track is about 3 GB and around two and a half hours long.
Thank you so much.
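FLAC is not a standard codec for MPEG-TS, which would explain both tools dropping the track. One workaround (an assumption, not a verified fix) is to keep the existing streams as they are and transcode the new track to a TS-friendly 5.1 codec such as AC-3; file names here are placeholders:
ffmpeg -i input.ts -i audio.flac -map 0 -map 1:a -c copy -c:a:3 ac3 -b:a:3 640k output.ts
Here -c:a:3 targets the newly added fourth audio stream, leaving the original three untouched.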
I am writing my first program using the FFMPEG libraries, and unfortunately it's not a simple one.
What I need is to:
For now I am playing with the ffmpeg.exe tool, trying to achieve this functionality. The command I have looks like this:
.\ffmpeg.exe -threads auto -y -i input0 -i input1 \
  -filter_complex "[0:v]scale=1920x1080[v0];[1:v]scale=480x270[v1];[v0][v1]overlay=1440:810[v2]" \
  -map [v2] -map 0:a -c:v libx264 -preset ultrafast -c:a copy output.mp4
When input0 and input1 are files, the resulting output is correct; when the inputs are UDP streams, however, the resulting output is not correct and the video freezes most of the time.
The file inputs are generated from the UDP streams using the following command:
.\ffmpeg.exe -threads auto -y -i "udp://@ip:port" -c copy -f mpegts input1.mpg
Question 1. Why is the above command not producing good output for UDP streams? What are the differences, as far as ffmpeg.exe is concerned, between the original stream and the dump of that stream?
Question 2. Is there some argument (or arguments) that can fix the command?
Question 3. What kind of logic/algorithm is needed to correctly overlay two network streams?
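One commonly suggested adjustment for live inputs (an assumption on my part, not a verified fix) is to rebase each input's timestamps before the overlay, since two independent network streams rarely start from the same clock and the overlay filter stalls while waiting for matching timestamps; the udp://@ip:port addresses are placeholders:
.\ffmpeg.exe -threads auto -y -i "udp://@ip1:port1" -i "udp://@ip2:port2" \
  -filter_complex "[0:v]setpts=PTS-STARTPTS,scale=1920x1080[v0];[1:v]setpts=PTS-STARTPTS,scale=480x270[v1];[v0][v1]overlay=1440:810[v2]" \
  -map [v2] -map 0:a -c:v libx264 -preset ultrafast -c:a copy output.mp4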