deepfacelab中文网

Thread starter: 天天好想你

Does the universal-model training method work with the LIAE architecture?

Posted on 2022-1-11 03:24:47
Learned something new!
Posted on 2022-1-11 11:01:19
Quoting dsyrock (posted 2022-1-10 22:53): "In my own experience, the DF architecture simply does not work with universal models; it is very unsuitable. Both in my own experiments and with 滚石's own Yang Mi model, the results were ..."

Thanks for the work of reposting and translating.
Posted on 2022-1-11 12:06:35
Note: this author has been banned or deleted; the content was automatically hidden.
Posted on 2022-1-11 12:37:30
This question is rather deep; we may have to wait for a senior expert to answer.
Posted on 2022-1-19 19:56:52
I don't understand this, but bumping to help.
Posted on 2022-2-12 12:45:21
Quoting dsyrock (posted 2022-1-10 22:53): "In my own experience, the DF architecture simply does not work with universal models; it is very unsuitable. Both in my own experiments and with 滚石's own Yang Mi model, the results were ..."

Could someone point me to the link?
Posted on 2022-2-12 12:56:46
The site contains content not suitable for public linking, so I won't post the link. You can find it on the author's GitHub page; everything is posted there.
Posted on 2022-3-31 11:18:11
Bro, did you ever find the link...?
Posted on 2022-3-31 11:46:01
Every million iterations, if the result doesn't resemble the source, delete inter_AB; later you apparently also have to enable HSV, which is rather troublesome.
Posted on 2022-6-27 22:00:16
I found the answer on MrDeepFakes and am pasting it here. If anything is unclear, feel free to ask; I'm a beginner at DFL, but I studied in the US, so my English is decent.
Note: RTM is what this forum usually calls the "universal model" (万能丹). ReadyToMerge literally means the model is "ready to merge", i.e. it needs no further training.

10.3 RTM Training Workflow:

With the introduction of DeepFaceLive (DFLive), a new training workflow has been established. Contrary to what some users think, this is not a new training method and does not differ significantly from regular training; some people have already used it in one way or another, and you may even have created an RTM model by accident without realizing it.

RTM (ReadyToMerge) models are created by training a SRC set of the person we want to swap against a large and varied DST set containing random faces of many people, covering all possible angles, expressions, and lighting conditions. The SRC set must also have a large variety of faces. The goal of RTM training is to create a model that can apply our SRC face to any video, primarily for use with DeepFaceLive, but also to speed up training in DeepFaceLab 2.0 by providing a base model that adapts to new target videos far more quickly than a model trained from scratch.

The recommended models for the RTM workflow are SAEHD LIAE models, LIAE-UD or LIAE-UDT, thanks to their superior color and lighting matching and their ability to adapt to different face shapes better than the DF architecture.
AMP models can also be used to create RTM models, although they work somewhat differently; since I lack the know-how to explain the AMP workflow yet, this part of the guide covers only LIAE RTM training.

1. Start by preparing the SRC set: cover all possible angles, each with as many different lighting conditions and expressions as possible. The better the coverage of different possible faces, the better the results.

2. Prepare a DST set by collecting many random faces: this dataset must also have as much variety as possible. It can be truly random, consisting of both masculine and feminine faces of all skin colors, or it can be specific, for example black masculine faces or feminine Asian faces, if that is the type of target face you plan to use the model with most. The more variety and the more faces in the set, the longer the model will take to train.
ALTERNATIVELY, use the RTM WF dataset from iperov: https://tinyurl.com/2p9cvt25
If the link is dead, go to https://github.com/iperov/DeepFaceLab and find the torrent/magnet link to the DFL builds; they include the RTM WF dataset.
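Before starting a run measured in millions of iterations, it can help to sanity-check that both face sets are actually large enough. The sketch below is a minimal, hypothetical helper: the folder paths and the size thresholds are assumptions for illustration, not values from DFL itself.

```python
from pathlib import Path

# Hypothetical workspace layout; adjust to where your aligned face sets live.
SRC_DIR = Path("workspace/data_src/aligned")
DST_DIR = Path("workspace/data_dst/aligned")

def count_faces(folder: Path) -> int:
    """Count extracted face images (aligned faces are saved as .jpg/.png)."""
    if not folder.exists():
        return 0
    return sum(1 for p in folder.iterdir() if p.suffix.lower() in {".jpg", ".png"})

def variety_warnings(src_count: int, dst_count: int, min_dst: int = 30_000) -> list[str]:
    """Rough sanity checks; the thresholds are illustrative guesses, not DFL rules."""
    warnings = []
    if src_count < 3_000:
        warnings.append("SRC set is small; angle/lighting coverage may be poor.")
    if dst_count < min_dst:
        warnings.append("Random DST set is small; the model may overfit to it.")
    return warnings
```

Usage would be `variety_warnings(count_faces(SRC_DIR), count_faces(DST_DIR))`; an empty list means both sets pass the rough size check.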

3. Apply XSeg masks to both datasets: this ensures the model trains correctly. As with any other training, it is required in order to create a WF model; while it is optional for FF models, applying an XSeg mask of the correct type to both datasets is still recommended. Make sure you use the same XSeg model for both datasets.

4. Pretrain a new model, or use one you have already pretrained: pretrain for at least 600k-1kk (1 million) iterations.

5. Start training on your SRC and the random DST. If you are using an existing RTM model, trained by you or someone else, as your base model instead of a pretrained one, delete the inter_AB file from the "model" folder before resuming training:

NEW WORKFLOW:

Pretrain or download a WF LIAE-UDT SAEHD model (the -UDT variant achieves better SRC likeness; if you can't run a LIAE-UDT model, pretrain or use an existing pretrained LIAE-UD model).

Resolution: 224 or higher. Face type: WF.
Dims: AE: 512, E: 64, D: 64, D masks: 32.
Settings: EMP enabled, Blur Out Mask enabled, UY enabled, LRD enabled, BS: 8 (if you can't run your model at a high enough BS, follow the standard procedures to reduce model requirements: archi, dims, optimizer, optimizer/LRD on CPU).
Other options should be left at default values. Optionally use HSV at power 0.1 and whichever CT mode works best for you, usually RCT.
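The settings above can be collected into one place so they are easy to review before a run. This is only a sketch: the dictionary keys are illustrative names, not DFL's actual option identifiers, and the multiple-of-16 check reflects the common SAEHD convention rather than a guarantee for every build.

```python
def rtm_settings(resolution: int = 224, batch_size: int = 8) -> dict:
    """Recommended RTM settings from the guide, as a plain dict.
    Key names are illustrative; map them to your DFL build's prompts."""
    # SAEHD resolutions are conventionally multiples of 16.
    assert resolution % 16 == 0, "resolution should be a multiple of 16"
    return {
        "archi": "liae-udt",            # fall back to liae-ud if -udt won't run
        "resolution": resolution,
        "face_type": "wf",
        "ae_dims": 512, "e_dims": 64, "d_dims": 64, "d_mask_dims": 32,
        "eyes_mouth_prio": True,        # EMP
        "blur_out_mask": True,
        "uniform_yaw": True,            # UY
        "lr_dropout": True,             # LRD
        "batch_size": batch_size,
        "ct_mode": "rct",               # optional; whichever CT mode works best
        "hsv_power": 0.1,               # optional
        "gan_dims": 32,
        "gan_patch_size": resolution // 8,  # patch size = 1/8 of resolution
    }
```

For example, at resolution 224 the derived GAN patch size is 28; at 256 it is 32.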

Make a backup before every stage or enable auto backups.

1. Train 2,000,000 iters with RW enabled, deleting inter_AB.npy every 500k iters (save and stop training, delete the file, then resume training).
2. After deleting inter_AB for the 4th time, train an extra 500k iters with RW enabled.
3. If the swapped face looks more like DST, delete inter_AB and repeat step 2.
4. Disable RW and train for an additional 500k iters.
5. Enable GAN at power 0.1 with GAN dims: 32 and a patch size of 1/8th of your model resolution, for 800,000 iters.
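The periodic inter_AB deletion in step 1 can be scripted so each reset is backed up first. This is a hedged sketch: the model folder and the `model_SAEHD_inter_AB.npy` filename assume a LIAE SAEHD model named "model"; your model prefix may differ, and training must be stopped before the file is touched.

```python
import shutil
from pathlib import Path

MODEL_DIR = Path("workspace/model")           # hypothetical model folder
BACKUP_DIR = Path("workspace/model_backups")  # hypothetical backup folder

def reset_inter_ab(iteration: int) -> bool:
    """Back up and delete the inter_AB weights, as the RW-phase schedule
    requires every 500k iters. Stop training first, then resume after."""
    target = MODEL_DIR / "model_SAEHD_inter_AB.npy"  # name depends on model prefix
    if not target.exists():
        return False
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    shutil.copy2(target, BACKUP_DIR / f"inter_AB_{iteration}.npy")
    target.unlink()
    return True

def is_reset_iteration(iteration: int, period: int = 500_000, resets: int = 4) -> bool:
    """True at 500k, 1000k, 1500k and 2000k: the four scheduled deletions."""
    return iteration % period == 0 and 1 <= iteration // period <= resets
```

After the fourth reset (at 2,000,000 iters) the schedule moves to the extra 500k RW run in step 2.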

ALTERNATIVE EXPERIMENTAL WORKFLOW (NOT TESTED):

Follow the same steps as in the new workflow, except do not keep EMP and LRD enabled the whole time. Instead, near the end of steps 2/3, run the model for a while with RW and LRD enabled until the loss stops decreasing, then move on to step 4 by disabling both RW and LRD. After 400-500k iters, run EMP for about 100-200k, then disable EMP and enable LRD for another 100-200k. UY can be left enabled the whole time, or disabled and re-enabled halfway through steps 2/3 and again halfway through step 4.
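Since the experimental schedule toggles several options at different points, it may be easier to follow written out as an ordered list of phases. This is only a sketch of my reading of the paragraph above: the fixed iteration counts use midpoints of the guide's 400-500k and 100-200k ranges, and the phase boundaries are assumptions.

```python
# One entry per training phase; iters=None means "run until loss plateaus".
SCHEDULE = [
    {"phase": "RW, inter_AB reset every 500k", "iters": 2_000_000, "rw": True,  "lrd": False, "emp": False},
    {"phase": "RW extra run",                  "iters": 500_000,   "rw": True,  "lrd": False, "emp": False},
    {"phase": "RW + LRD until loss plateaus",  "iters": None,      "rw": True,  "lrd": True,  "emp": False},
    {"phase": "plain training (no RW/LRD)",    "iters": 450_000,   "rw": False, "lrd": False, "emp": False},
    {"phase": "EMP burst",                     "iters": 150_000,   "rw": False, "lrd": False, "emp": True},
    {"phase": "LRD finish",                    "iters": 150_000,   "rw": False, "lrd": True,  "emp": False},
]

def total_fixed_iters(schedule: list[dict]) -> int:
    """Sum only the phases with a fixed iteration budget."""
    return sum(p["iters"] for p in schedule if p["iters"] is not None)
```

Under these assumed midpoints, the fixed phases alone total 3,250,000 iterations, before the open-ended plateau phase.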

deepfacelab中文网 | GMT+8, 2024-9-20 23:38 | Powered by Discuz! X3.4