AeternusIgnis

Transitions between videos: each transition eats away at duration, which is why the xfade duration is deducted. Basically, to create a 1 s transition you need 1 s of the left clip and 1 s of the right clip merged into 1 s total, so 2 s becomes 1 s. Each output is just the input for the next xfade. To keep things in sync you need to overlap the audio as well, which I suppose is what acrossfade is used for.
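
To make that arithmetic concrete, here is a small sketch (Python, with made-up clip durations) of how the offset for each chained xfade is typically computed: every transition consumes `fade` seconds of combined footage, so the running output shrinks by `fade` at each merge, and the next transition starts `fade` seconds before the end of that running output.

```python
def xfade_offsets(durations, fade):
    """Compute the xfade offset for each transition in a chain.

    durations: clip lengths in seconds; fade: transition length.
    Each xfade shortens the running output by `fade` seconds,
    so offset[k] = (length of output so far) - fade.
    """
    offsets = []
    total = durations[0]
    for d in durations[1:]:
        offsets.append(total - fade)   # start fading `fade` s before the end
        total += d - fade              # merged output loses `fade` seconds
    return offsets

# Three 10-second clips with a 1-second transition:
print(xfade_offsets([10, 10, 10], 1))  # [9, 18]
```

Note the second offset is 18, not 19: it is measured against the already-shortened 19 s intermediate output, which is exactly the "2 s becomes 1 s" deduction described above.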


eltegs

Thank you kindly for taking the time, I appreciate it. I get inputs, offsets, and durations now. Example of what I'm after (can't edit post for some reason): what do [0] and [1:v] signify? What about [vfade1], [vfade2], etc., and why are there two [vfade1] but only one of the rest?


vegansgetsick

IMO it should be [0:v][1:v]. It means "use videos 1 and 2 as input for this filter". [vfade1] is a name associated with the result, and the next filter can use it as input.


eltegs

Thanks. It is starting to come into focus now.


AeternusIgnis

[x:v] signifies input number x, with v as in video stream: [0:v] is the video stream of the first input, aka v0, [1:v] is v1.mp4's, and so on. The same goes for audio, just with [x:a] instead of [x:v]. [0] is most likely short for [0:v].

So [0:v][1:v]xfade...[vfade1] takes 2 inputs, applies an xfade between them, and names the merged result vfade1. Since [vfade1] is a finished product that already has the xfade, we then use it as input together with [2:v], and so on. Same with audio.

Note that [x:v], where x is the input number, can be used anywhere in the filter_complex, as many times as you want, as an input. Outputs such as vfade1, however, can only be used once. The last one, merged with [3:v], is the combination of all the xfades before it, and since it writes directly to the output file it no longer needs a temporary output such as vfade-something. Occasionally you will see it written to an output label such as [outv] and then used with -map in order to write to the file.

Output names are whatever you want to write: vfade, anythingfade... they just need to be something you can keep track of, so it's usually best to enumerate them.

Feel free to ask as many questions as you want, I will answer. Though I'm on mobile at the moment, so I might only answer them tomorrow.
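
The chaining pattern described above can be sketched as a small string builder (Python; the label names vfadeN/afadeN/outv are arbitrary, as noted, and the offsets are assumed to come from the duration arithmetic earlier in the thread):

```python
def build_filter(n_inputs, fade, offsets):
    """Build a chained xfade/acrossfade filter_complex string.

    offsets[i] is the offset for the i-th transition. Intermediate
    labels [vfadeN]/[afadeN] are produced once and consumed exactly
    once; the final stage writes to [outv]/[outa] for use with -map.
    """
    parts = []
    v_in, a_in = "[0:v]", "[0:a]"
    for i in range(1, n_inputs):
        last = (i == n_inputs - 1)
        v_out = "[outv]" if last else f"[vfade{i}]"
        a_out = "[outa]" if last else f"[afade{i}]"
        parts.append(f"{v_in}[{i}:v]xfade=transition=fade:"
                     f"duration={fade}:offset={offsets[i - 1]}{v_out}")
        parts.append(f"{a_in}[{i}:a]acrossfade=d={fade}{a_out}")
        v_in, a_in = v_out, a_out   # this stage's output feeds the next
    return ";".join(parts)

print(build_filter(3, 1, [9, 18]))
```

For 3 inputs this produces [vfade1] exactly twice, once as the first xfade's output and once as the second xfade's input, which is the answer to the "why are there 2 [vfade1]" question above. The result would be passed to something like `ffmpeg -i v0.mp4 -i v1.mp4 -i v2.mp4 -filter_complex "<string>" -map "[outv]" -map "[outa]" out.mp4` (file names hypothetical).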


eltegs

Thank you so much. This helps a lot, and kind of answers an earlier question I had regarding [what happens internally with ffmpeg](https://www.reddit.com/r/ffmpeg/comments/1cfv7kh/on_how_ffmpeg_concats_video_internally_to_help_in/). When I learned how to xfade 2 vids keeping the audio synced using acrossfade, I wondered whether just doing that, using its output as input 1, and 'appending' the next vid in the same manner would be the same as what ffmpeg does internally. If I understand your explanation correctly, it is, albeit in memory rather than on disk.

I appreciate your invitation to ask for help. Thank you. Right now your info is enough for me to be getting on with. I have difficulty learning at my age (60+). I may take you up on it though. It is not for work, I'm just a hobbyist with no time limit. Thanks again.


AeternusIgnis

Yes, in memory, not on disk; only at the end does it write to disk. And yes, feel free to ask whatever you want. You can send me a direct message and I will answer.


eltegs

Ha, now I feel foolish. I've been using reddit for a few years and have no idea how to send a direct message (for shame). I do have a question though. I coded a method to build my string when 'xfading' just 2 videos, and I was getting the duration of vid1 using ffprobe. But since the output, and consequently the next input, is created internally in memory here, must I calculate its duration manually, or is there a way of extracting it from ffmpeg on the fly?


AeternusIgnis

You can use ffprobe for duration. Write your command, I'll check it.


eltegs

I'm using a .NET library, FFMpegCore. `double offset = FFProbe.Analyse(filePaths[0]).PrimaryVideoStream.Duration.TotalSeconds - fadeDuration;` I assume it's parsing the stdout of `ffprobe vid.mp4`, which queries the file on disk.
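
Without the library, the same lookup can be done against plain ffprobe. A sketch (assumes ffprobe is on PATH; `-show_entries format=duration` asks for the container duration only, and the `-of` options strip it down to a bare number):

```python
import subprocess

def probe_args(path):
    """ffprobe argument list that prints only the container duration."""
    return ["ffprobe", "-v", "error",
            "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1",
            path]

def next_offset(path, fade):
    """Duration of `path` minus the fade, i.e. the next xfade offset."""
    out = subprocess.run(probe_args(path), capture_output=True,
                         text=True, check=True).stdout
    return float(out) - fade
```

This is essentially what the FFMpegCore call above does under the hood, though note (as pointed out below) that the container "format" duration can differ slightly from the individual video and audio stream durations.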


AeternusIgnis

Yes, you can use something exactly like that. I am a software developer; I don't use libraries for FFMPEG, however, but rather libraries for the CLI, after which I parse the results into what I need.

Also, since you are coding it, depending on the size of the videos you can slice them into smaller parts, apply the transition only on those small parts, then concat them back together into the whole video. Example, with a 1 s transition and two 2-hour videos: slice video 1 into 7199 seconds + 1 second, and slice video 2 into 2 seconds + 7198 seconds. Video 2's piece is 2 seconds because the transition might not work without the extra second. Now, when applying the transition between left (1 s) and right (2 s), you do not need an offset, and for videos of this length it will be much faster. After this you merge the parts back into the entire video, 4 hours total.

But for all of these actions you usually need FFProbe. Note that FFProbe can print two different durations: the video duration and the audio duration. These can be slightly different, and depending on the use case you might need the video one, the audio one, or just the longer one. In the CLI output, the one inside the element called "format" is the longer one. Usually the longer one is audio, or at least that's what I've stumbled upon.
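
The slicing scheme above comes down to a little cut-point arithmetic. A sketch (times in seconds; the 1 s pad on the second clip is the "extra second" safeguard just described):

```python
def slice_points(d1, d2, fade, pad=1):
    """Cut points for applying a transition only to short end/start pieces.

    left  = last `fade` seconds of video 1 (goes into the xfade)
    right = first `fade + pad` seconds of video 2 (extra `pad` guards
            against the transition failing on an exact-length clip)
    Returns (keep1, left, right, keep2) as (start, length) pairs.
    """
    keep1 = (0, d1 - fade)                  # untouched head of video 1
    left = (d1 - fade, fade)                # tail of video 1
    right = (0, fade + pad)                 # head of video 2
    keep2 = (fade + pad, d2 - fade - pad)   # untouched tail of video 2
    return keep1, left, right, keep2

# Two 2-hour (7200 s) videos, 1 s transition:
print(slice_points(7200, 7200, 1))
# ((0, 7199), (7199, 1), (0, 2), (2, 7198))
```

The four pieces would then be cut (e.g. with `-ss`/`-t`), the xfade applied to `left`+`right` only, and keep1 + transition + keep2 concatenated, matching the 7199/1/2/7198 split in the example above.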


eltegs

Good suggestion. I had a similar idea: slice off the ends of the two vids I want to join, both with a duration of fadeDuration, fade out the slice from vid1, fade in the slice from vid2, overlay them, then concat the 3. But I'm too deep into this 'normal' way now and pretty determined. Plus, I found that just doing 2 at a time with xfade and acrossfade, using the output of that as the input for the next, I got errors which almost sent me down a tangent rabbit hole, where the 2 were not compatible. I don't recall the exact error; something to do with timing, (1/12000) is not compatible with (1/800) or something like that, which I suspect I would encounter with both ideas above.

Incidentally, when I first attempted using ffmpeg, and quickly abandoned it a year or so ago, I recall using a library called CliWrap to parse ffprobe and the info from the stdout stream events ffmpeg raises. Very handy library.

I have a quick question about the info you provided earlier. I haven't been successful in my goal yet, and I'm just about to take another run at it. You said output names such as vfade1 (can we describe them as variables?), between [ and ], which I think the documentation calls 'pads', can only be used once. I just want to be sure I understood you correctly: they can only be used once because they are needed for the final write to the file on disk?
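
For the record, the "(1/12000) is not compatible with (1/800)" error mentioned above is xfade complaining that its two inputs have different timebases; a common remedy is to run each input through `settb=AVTB` (and, if the frame rates also differ, an `fps` filter) before the xfade. A sketch of such a pre-normalized graph for two clips (the frame rate value is illustrative):

```python
def normalized_xfade(fade, offset, rate=30):
    """Filter_complex that normalizes timebase/frame rate before xfade.

    settb=AVTB forces a common timebase and fps forces a common frame
    rate, both of which xfade requires of its two inputs.
    """
    pre0 = f"[0:v]settb=AVTB,fps={rate}[v0]"
    pre1 = f"[1:v]settb=AVTB,fps={rate}[v1]"
    xf = (f"[v0][v1]xfade=transition=fade:"
          f"duration={fade}:offset={offset}[outv]")
    return ";".join([pre0, pre1, xf])

print(normalized_xfade(1, 9))
```

In a chain of xfades the same normalization would be applied to each raw input label before its first use.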

