New features
  • UMxx: Added the QuickTime version of the encoder.
Performance improvements
  • UMxx: Sped things up across the board.
Bug fixes
  • ULxx: The encoder did not return an error when given invalid settings.

readme Japanese English / License (GPLv2) Japanese English
Binaries Windows (exe) Mac (zip) / Source / GitHub

When 19.0.1 was released I wrote that the QuickTime encoder did not work properly, but after looking into it a bit more it turned out not to be a problem specific to UMxx, so there was no longer any reason to withhold only UMxx, and I have added it. The reason it does not work is still under investigation.

As for the results of the speedup: it depends on the measurement pattern, but it is roughly 10% to 20%. I have run out of steam for now, so I will skip the graphs. Judging from the measurements, another 15% or so of speedup looks possible, but that would require writing hard-to-read code, so I will leave it at this.

16 comments until now

  1. Fernando vidigal @ 2018-01-23 07:07

    Hi,

    Is there any chance you could add, in the near future, support for more 10+ bit depth formats, like P210 and v410 for instance?
    Now that VirtualDub FilterMod has added support for more 10/16 bit formats like P010, P016, P210, P216, v410 and more, it would be quite important to have support from a fast codec like UtVideo; that would add up to a more efficient ecosystem, providing better and stronger support for the 10 bit formats now easily provided by a lot of capture cards.
    Thanks, and I do hope that adding support for more 10 bit formats (not only v210) is part of your future strategy.

  2. 梅澤 威志 @ 2018-01-24 00:20

    To add support for input/output from/to 10bit+ formats (like P210, YUV422P10, etc.), there needs to be:
    – A free (that is, open source) NLE software that passes frames with the formats to/from codecs.
    and
    – A free software framework that can generate test video clips with the formats. (see https://github.com/umezawatakeshi/utvideo-testclip)
    and
    – Of course, my passion and your passion.

    For 8bit formats, I use VirtualDub (and its forks) for the first purpose, and VirtualDub plus AviSynth for the second purpose.

    If you know such software, please let me know.

    In addition, please see the discussion in https://github.com/umezawatakeshi/utvideo/issues/16

  3. Fernando vidigal @ 2018-01-24 07:51

    Thanks for the quick answer.

    About the resources needed:
    1- A free (that is, open source) NLE software that passes frames with the formats to/from codecs.

    I may be wrong, but I thought that VirtualDub FilterMod from version 18 update 2 (build 40879) does provide support for the formats P010, P210, P016, P216, v410 and y410 (input/output), and it has now fixed some problems with the P210 format that was not working correctly. I can, for instance, output P210 from my capture card and compress on the fly during capture (P210 -> YUV422P16) to several formats like FFmpeg/FFV1 10 bit, the FFmpeg Huffyuv variant at 10 bit, or x264 and x265 10 bit, and vice versa from 10 bit FFV1, Huffyuv, x264, x265, etc. back to raw uncompressed video (v210 or 4:2:2 YCbCr 16 bit). The same applies to P010, v410 and y410, using intermediate 16 bit versions, to and from any codec currently supported. I don't know if this is enough.

    2- A free software framework that can generate test video clips with the formats

    I don't know exactly what you need for this, but if you need video clips with different W x H and formats/codecs/containers, I will be more than happy to try to provide all the clips that you may eventually need; please contact me directly and tell me what test clips you need. I will also be ready to test any beta version you may provide. If I can help in any other way, please do tell me; I will try my best, even if I am only a user, not a dev.

    I have helped the VDFM dev (shekh) to test the P210 support and now it's working, but output of Y3[10][10] to file is still not supported, I think.

  4. 梅澤 威志 @ 2018-01-25 14:53

    > 1- A free (that is, open source) NLE software that passes frames with the formats to/from codecs.
    > I thought that VirtualDub FilterMod from version 18 update 2 (build 40879) does provide support
    That's good news. I will look into it.

    > 2- A free software framework that can generate test video clips with the formats
    > I don't know exactly what you need for this
    Oops, sorry.
    The framework must have the following features:
    – It can create clips with exact pixel values from an image file sequence (like PNG). The image files may be generated programmatically by another framework.
    – It can create clips with exact pixel values programmatically (by using SetPixel, for example) — each pixel needs an exact 10bit value, for example, not 16bit, and vice versa (see the sketch just below).
    AviSynth (or AviSynthPlus) + VirtualDub FilterMod work fine for 8bit formats, but were not sufficient for 10bit+ formats (when I examined them before).
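    To illustrate the "exact 10bit value" point: a test generator has to be able to place an exact 10-bit code in every sample, not just some 16-bit value. A minimal C++ sketch, assuming the usual P010/P210 convention that the 10 significant bits are MSB-aligned in a 16-bit word (the helper names are made up for illustration):

      #include <cstdint>

      // Pack an exact 10-bit code (0..1023) into a P210-style 16-bit word.
      // Assumption: samples are MSB-aligned, so the low 6 bits stay zero.
      inline uint16_t pack10(uint16_t code10)
      {
          return static_cast<uint16_t>((code10 & 0x3FF) << 6);
      }

      // Fill a luma plane with a horizontal ramp whose 10-bit values are known
      // exactly, so a unit test can compare decoder output bit for bit.
      void fill_y_ramp(uint16_t *y_plane, int width, int height)
      {
          for (int y = 0; y < height; ++y)
              for (int x = 0; x < width; ++x)
                  y_plane[y * width + x] = pack10(static_cast<uint16_t>(x & 0x3FF));
      }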

    Test clips for unit tests may depend on the internal structure of the codecs, in order to test corner cases. The number of test clips will be vast and the clips should be generated systematically.
    Thanks for the offer, but I think that asking you to create the test clips is unrealistic.

  5. Fernando vidigal @ 2018-01-26 10:09

    Well, I see; the second requirement could be more problematic. However,

    from your previous contact with the VDFM dev:

    “would be better if output Y3[10][10] to file is supported
    What is possible use for such file? Why do you want it?
    However it is not a big deal, I can put it in “other formats” list in the “pixel format” dialog.
    Output P210 to codec: do you want it? I suspended implementation because there was nothing to gain”

    It seems Shekh has fixed both, so in this case I misled you: it is possible to convert from lossless 10+ bit formats to 10 bit or 16 bit uncompressed, not only 16 bit.

    Formats supported
    https://we.tl/ZuMH5ixVSr

    Perhaps this is enough for you.

    If not, I think that if you let him know exactly what your requirements are for supporting 10+ bit formats with your codec, he will be willing to help (if there is not too much effort involved, of course). I think it would definitely be beneficial, first of all for end users, but also for both of you: a flexible and strong application as a base for captures and tests with your codec is certainly interesting for you, and the VDFM application and its dev gain support for a fast codec that allows lossless live captures at high bit depths, high resolutions or high frame rates (where the files are huge and the gain could be tremendous). This could create momentum and help launch a new ecosystem able to support the high bit depth formats that are very much needed at the present time.

  6. […] I tried installing Umezawa-san's Ut Video codec, which adds several compression methods, but honestly I have no idea which one to choose. Since I edit on a Mac to begin with, from the AVI format a further conv […]

  7. 大野守 @ 2018-03-18 20:54

    With the current version, even when I select YV420 BT.XX VCM, AviSynth gives a warning. When I check with Info it has turned into YV16. Is this how it is supposed to work?

  8. 梅澤 威志 @ 2018-03-19 23:46

    If you mean that when you give YV420 BT.XX video to AviSynth's AviSource filter without specifying pixel_type it comes out as YV16, then that is how the AviSource filter is specified to behave.

    When pixel_type is not specified, the formats are tried in the order YV24, YV16, YV12, … and the first one the decoder supports is adopted (the priority configured on the decoder side is ignored at this point). For convenience, the UtVideo YV420 BT.XX VCM decoder can output not only YV12 but also other formats such as YV16, and as a result the output comes out as YV16 (see the sketch at the end of this comment).

    If you expect the output to be YV12, you need to either specify pixel_type = “YV12” explicitly, or use the DirectShowSource filter, which follows the decoder-side priority.

    For details, see http://avisynth.nl/index.php/AviSource (in English).
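    A rough sketch of the negotiation described above (illustration only, not the AviSynth source; the helper decoder_supports() is hypothetical):

      #include <string>
      #include <vector>

      // Hypothetical helper: asks the VCM decoder whether it can decompress to fmt.
      bool decoder_supports(const std::string &fmt);

      std::string negotiate_format(const std::string &pixel_type /* empty if not given */)
      {
          if (!pixel_type.empty())
              return pixel_type;                       // pixel_type = "YV12" forces YV12
          const std::vector<std::string> try_order = { "YV24", "YV16", "YV12" /* , ... */ };
          for (const auto &fmt : try_order)
              if (decoder_supports(fmt))               // decoder-side priority is never consulted;
                  return fmt;                          // UtVideo YV420 VCM also accepts YV16, so YV16 wins
          return "";                                   // no usable format
      }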

  9. Hello Takeshi San,
    At first, please accept my apologies for using English (it is also not my native language), but being gaijin it is pretty self explanatory – once again my apologies.

    Please allow me to thank You for Your work; it is highly appreciated.

    I have two questions:
    The first is related to the discussion at https://forum.videohelp.com/threads/388402-Ffmpeg-progressive-to-intelaced about support for interlacing in Your codec as well. Are You able to elaborate more on this? How is interlacing supported within the Ut Video Suite?

    My second question – did You ever consider adding an intermediate block in Your Ut Video codec where RGB may be converted to reversible YCoCg (YCoCg-R), for example this: https://www.microsoft.com/en-us/research/publication/ycocg-r-a-color-space-with-rgb-reversibility-and-low-dynamic-range/ . As YCoCg provides strong decorrelation of chroma from luma yet is computationally cheap, it may be a perfect solution to increase the coding gain for RGB without losing RGB (data is fed in and available at the output as RGB, but natively stored as YCoCg).

    Once again Thank You for Your work.
    With Kind Regards

  10. 梅澤 威志 @ 2018-03-24 22:37

    No problem. English is the global language :-p

    The answer to the first question:
    First, note that I don't know about the UtVideo implementation in FFmpeg.
    The original UtVideo uses the “interlace flag” for the following two purposes:
    – To do slightly different intra-frame prediction in “predict gradient” and “predict median”, in order to achieve better compression for interlaced video (a rough sketch follows below).
    – To do correct chroma sampling for interlaced video while converting between the internal YUV420 format and external RGB or YUV422 formats (in ULY0/ULH0).
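    A rough C++ sketch of the first point. This is not the actual UtVideo code; it only assumes, for illustration, that “slightly different prediction” means taking the “above” neighbour from the same field (two lines up) when the interlace flag is set. Border handling is omitted.

      #include <algorithm>
      #include <cstdint>

      // Classic MED predictor: median(left, above, left + above - aboveleft).
      static uint8_t med_predict(uint8_t left, uint8_t above, uint8_t aboveleft)
      {
          if (aboveleft >= std::max(left, above)) return std::min(left, above);
          if (aboveleft <= std::min(left, above)) return std::max(left, above);
          return static_cast<uint8_t>(left + above - aboveleft);
      }

      void predict_plane(const uint8_t *src, uint8_t *residual,
                         int width, int height, bool interlace)
      {
          // Assumption: for interlaced video the reference line is the previous
          // line of the same field, i.e. two lines up instead of one.
          const int up = interlace ? 2 * width : width;
          for (int y = (interlace ? 2 : 1); y < height; ++y)
              for (int x = 1; x < width; ++x) {
                  const int i = y * width + x;
                  residual[i] = static_cast<uint8_t>(
                      src[i] - med_predict(src[i - 1], src[i - up], src[i - up - 1]));
              }
      }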

    The answer to the second question:
    I have heard about YCoCg and tested it several years ago. The result: YCoCg is not useful for UtVideo, because it complicates the compression process (due to the additional data bits) while improving the compression ratio only a little.
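    For reference, the reversible YCoCg-R transform itself is only a few shifts and adds; the catch is that for 8-bit RGB the Co and Cg channels need 9 bits each, which is what “additional data bits” refers to. A minimal C++ sketch (not taken from the UtVideo source):

      // YCoCg-R forward and inverse transforms, per the paper linked above.
      // For 8-bit R, G, B: Y stays in 0..255, but Co and Cg are signed 9-bit values.
      void rgb_to_ycocg_r(int r, int g, int b, int &y, int &co, int &cg)
      {
          co = r - b;
          int t = b + (co >> 1);
          cg = g - t;
          y  = t + (cg >> 1);
      }

      void ycocg_r_to_rgb(int y, int co, int cg, int &r, int &g, int &b)
      {
          int t = y - (cg >> 1);      // exactly undoes the forward pass, bit for bit
          g = cg + t;
          b = t - (co >> 1);
          r = b + co;
      }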

    Thank You for the quick reply.
    Starting from the second question and Your reply – I assume the case where a lossy RGB to YCoCg transformation is used was never interesting for You (I mean the special case where, at the cost of losing 1 bit, we receive colour highly decorrelated from luminance)?

    Going back to interlacing – some things are not clear to me. Are You able to follow changes in the material with frame accuracy (like hybrid content with mixed frames and fields – some encoders may produce such content based on motion analysis, so their output contains such a mixed combination)? And, most important for the overall discussion at the link provided earlier – are You exporting (exposing) the interlace signalling (flags like progressive/interlaced and TFF/BFF) to be used by other applications? There is a discussion about the Blender NLE but also about other applications like ‘Mediainfo’, which seems not to be aware of the interlaced status (I’m aware that Mediainfo may not use Your code, but it uses, for example, the same code as ffmpeg). So the question is: is UT Video able not only to deal properly with the 4:2:0/4:2:2 conversion, but also to follow (track) and store, and later recover and signal, that information in the time domain, so that an application using UT Video is aware of those flags (progressive/interlaced and TFF/BFF)?

    Thank you in advance,
    With Kind regards

  12. 梅澤 威志 @ 2018-03-25 16:01

    I am not very interested in adding lossy variants. There are many good lossy codecs.

    The answer is “NO”.

    The “interlace flag” of UtVideo only indicates that the video is *compressed as suitable for* interlaced content. It does not indicate that the video is *actually* interlaced. UtVideo is not interested in whether the video is actually interlaced or not, just as uncompressed video data itself does not contain any interlace info.

    Thank You very much, Your answers were very helpful.
    Once again, thank You for UT Video; Your hard work is very much appreciated.

    Best regards

  14. Zouhair Benchchaoui @ 2020-04-02 10:32

    I’ve been trying to use this with Adobe CC19 and I had no luck finding the MediaCoreQTCodecRules you’ve specified in the README file. Is there any way to do so?

  15. Zouhair Benchchaoui @ 2020-04-02 10:33

    I’m using macOS Mojave, by the way.

  16. 梅澤 威志 @ 2020-04-02 19:45

    Unfortunately, macOS Mojave has dropped QuickTime support, so we can no longer use the Mac version of UtVideo, which is based on QuickTime technology. Sorry. The version that completely dropped QuickTime support is Catalina.

    I don’t have Mac hardware anymore. I’m not sure where the codec rules file exists on Mac.
