deepfacelab中文网


[Reposted tutorial] Model parameter tables for each GPU

OP (Lau9), posted on 2024-01-19 22:33:01
Adabelief Enabled
| GPU | VRAM (GB) | CPU | RAM (GB) | Architecture | Resolution | AE Dims | E Dims | D Dims | D Mask Dims | Batch Size | Iteration Time (ms) | GPU Optimizer |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GTX 1060 | 6 | i5-4690K | 16 | LIAE-UD | 320 | 264 | 72 | 72 | 24 | 5 | 6500 | FALSE |
| GTX 1080 Ti | 11 | i7-4770K | 16 | DF-UD | 320 | 320 | 72 | 72 | 16 | 8 | 700 | TRUE |
| GTX Titan X | 12 | i7-2600K | 24 | LIAE-UDT | 224 | 256 | 64 | 64 | 16 | 6 | 1035 | TRUE |
| RTX 2080 | 8 | i7-8700K | 32 | LIAE-UD | 256 | 256 | 64 | 64 | 22 | 8 | 500 | FALSE |
| RTX 3050 | 8 | i5-4670 | 8 | DF-UD | 320 | 256 | 64 | 64 | 22 | 4 | 680 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | DF-UD | 320 | 320 | 96 | 96 | 32 | 7 | 1350 | TRUE |
| RTX 3060 | 12 | R5-5600X | 32 | DF-UD | 384 | 256 | 64 | 64 | 22 | 8 | 1140 | TRUE |
| RTX 3060 | 12 | R5-3600 | 32 | DF-UD | 384 | 256 | 64 | 64 | 22 | 8 | 1130 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | DF-UD | 320 | 360 | 90 | 90 | 22 | 7 | 1350 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | DF-UD | 320 | 288 | 80 | 80 | 22 | 9 | 1350 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | DF-UDT | 320 | 320 | 88 | 88 | 22 | 7 | 1350 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | DF-UDT | 256 | 300 | 80 | 64 | 22 | 17 | 1350 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | DF-UDT | 288 | 300 | 80 | 80 | 22 | 10 | 1350 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | DF-UDT | 320 | 360 | 90 | 90 | 22 | 5 | 1350 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | LIAE-UDT | 320 | 256 | 72 | 72 | 32 | 9 | 1350 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | LIAE-UD | 320 | 288 | 72 | 72 | 22 | 7 | 1350 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | LIAE-UD | 256 | 256 | 64 | 64 | 22 | 18 | 1350 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | LIAE-UD | 256 | 256 | 80 | 80 | 22 | 13 | 1350 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | LIAE-UD | 320 | 256 | 64 | 64 | 22 | 10 | 1350 | TRUE |
| RTX 3060 | 12 | i5-8400 | 32 | LIAE-UDT | 320 | 320 | 88 | 88 | 22 | 7 | 1350 | TRUE |
| RTX 3090 | 24 | R9-3900X | 32 | DF-UD | 384 | 512 | 112 | 112 | 16 | 8 | 1013 | TRUE |
| RTX 3090 | 24 | R9-3900X | 32 | DF-UD | 320 | 512 | 112 | 112 | 16 | 16 | 1074 | TRUE |
| RTX 3090 | 24 | i7-5820K | 48 | DF-UD | 416 | 416 | 104 | 104 | 26 | 8 | 1170 | TRUE |
| RTX 3090 | 24 | R9-3900X | 32 | LIAE-UDT | 224 | 512 | 64 | 64 | 16 | 48 | 998 | TRUE |
| RTX 3090 | 24 | R9-3900X | 32 | LIAE-UDT | 288 | 352 | 128 | 128 | 16 | 16 | 1320 | TRUE |
| Tesla V100 | 16 | Colab | 25 | DF-UD | 384 | 352 | 88 | 88 | 16 | 8 | 1000 | TRUE |
| Tesla V100 | 16 | Colab | 25 | DF-UD | 320 | 416 | 104 | 104 | 16 | 8 | 850 | TRUE |
| Tesla V100 | 16 | Colab | 25 | DF-UD | 384 | 320 | 80 | 80 | 22 | 8 | 900 | TRUE |
| Tesla V100 | 16 | Colab | 25 | DF-UD | 448 | 256 | 64 | 64 | 22 | 8 | 950 | TRUE |
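The Iteration Time column appears to be the per-iteration time the DFL trainer reports, in milliseconds. As a rough sketch (assuming ms units; `eta_hours` is a hypothetical helper, not part of DFL), the wall-clock time to reach a target iteration count is:

```python
def eta_hours(ms_per_iter: float, target_iters: int) -> float:
    """Estimate wall-clock training time in hours from a per-iteration
    time in milliseconds, as listed in the tables above."""
    return ms_per_iter * target_iters / 1000 / 3600

# e.g. an RTX 3060 row at 1350 ms/iter, trained to 500k iterations:
print(round(eta_hours(1350, 500_000), 1))  # 187.5 hours
```

This is only a lower bound; it ignores previews, autosaves, and the slowdown from enabling heavier options mid-training.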

Below are settings with Adabelief disabled, mostly older results with the base architectures. Consider these legacy and mostly irrelevant; do not train with Adabelief disabled. Keep in mind that without the -D flag, models are much heavier to train (the VRAM cost of reaching a given resolution is much higher). You could get better quality with something like DF-UT or LIAE-UT over the -UD/-UDT variants, but it will use a lot of VRAM; most people use the -UD/-UDT variants these days.
Adabelief Disabled
| GPU | VRAM (GB) | CPU | RAM (GB) | Architecture | Resolution | AE Dims | E Dims | D Dims | D Mask Dims | Batch Size | Iteration Time (ms) | GPU Optimizer |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GTX 750 Ti | 2 | i5-4690K | 32 | LIAE | 112 | 256 | 64 | 64 | 22 | 4 | 1450 | FALSE |
| GTX 970 | 4 | i7-2600K | 12 | DF | 96 | 256 | 64 | 64 | 22 | 4 | 700 | TRUE |
| GTX 1050 Ti | 4 | i5-3570K | 12 | DF | 128 | 192 | 48 | 48 | 36 | 2 | 645 | TRUE |
| GTX 1050 Ti | 4 | i5-3570K | 12 | DF | 128 | 192 | 48 | 48 | 36 | 2 | 645 | FALSE |
| GTX 1050 Ti | 4 | E5-1620 | 16 | LIAE | 128 | 128 | 80 | 48 | 16 | 4 | 520 | TRUE |
| GTX 1060 | 6 | i5-4670K | 16 | DF | 192 | 256 | 64 | 64 | 22 | 7 | 1200 | FALSE |
| GTX 1060 | 6 | i5-4590 | 16 | DF | 192 | 256 | 64 | 64 | 22 | 6 | 1400 | FALSE |
| GTX 1060 | 6 | i5-4670K | 16 | DF | 128 | 450 | 64 | 64 | 22 | 10 | 750 | TRUE |
| GTX 1060 | 6 | i7-6700HQ | 16 | DF-UD | 256 | 320 | 96 | 96 | 22 | 4 | 1800 | TRUE |
| GTX 1650 | 4 | i5-9300H | 16 | DF | 128 | 256 | 64 | 64 | 22 | 6 | 780 | TRUE |
| GTX 1650 | 4 | i5-9300H | 16 | DF | 128 | 256 | 64 | 64 | 22 | 8 | 960 | FALSE |
| GTX 1660 Ti | 6 | i7-9700 | 16 | DF | 192 | 256 | 64 | 48 | 22 | 8 | 676 | TRUE |
| GTX 1070 | 8 | i7-7700HQ | 16 | LIAE-UD | 224 | 288 | 96 | 96 | 16 | 4 | 800 | TRUE |
| GTX 1070 Ti | 8 | R5-3600 | 16 | DF | 192 | 256 | 64 | 64 | 22 | 10 | 1030 | FALSE |
| GTX 1080 | 8 | i7-8700K | 32 | DF | 192 | 256 | 64 | 64 | 22 | 8 | 780 | TRUE |
| GTX 1080 Ti | 11 | W3680 | 12 | DF-UD | 288 | 256 | 80 | 80 | 20 | 12 | 1205 | TRUE |
| GTX 1080 Ti | 11 | W3680 | 12 | DF-UD | 288 | 256 | 80 | 80 | 20 | 4 | 484 | TRUE |
| GTX 1080 Ti | 11 | W3680 | 12 | DF-UD | 288 | 256 | 80 | 80 | 20 | 6 | 719 | TRUE |
| GTX 1080 Ti | 11 | W3680 | 12 | DF-UD | 288 | 256 | 80 | 80 | 20 | 8 | 850 | TRUE |
| GTX 1080 Ti | 11 | W3680 | 12 | DF-UD | 288 | 256 | 80 | 80 | 20 | 8 | 862 | TRUE |
| GTX 1080 Ti | 11 | i5-4590 | 16 | LIAE | 192 | 256 | 64 | 64 | 22 | 8 | 670 | TRUE |
| GTX 1080 Ti | 11 | i5-4590 | 16 | LIAE | 192 | 256 | 64 | 64 | 22 | 12 | 900 | TRUE |
| Quadro M2200 | 4 | E3-1535M v6 | 32 | DF | 128 | 512 | 64 | 48 | 16 | 4 | 921 | TRUE |
| RTX 2060 | 6 | i5-2500K | 8 | DF | 128 | 256 | 64 | 64 | 22 | 14 | 600 | TRUE |
| RTX 2060 | 6 | i5-2500K | 8 | DF | 160 | 256 | 64 | 64 | 22 | 6 | 2500 | FALSE |
| RTX 2060 | 6 | R5-2600 | 16 | LIAE | 256 | 256 | 64 | 64 | 22 | 2 | 1700 | FALSE |
| RTX 2060 S | 8 | R5-3500 | 16 | DF-UD | 256 | 256 | 64 | 64 | 22 | 14 | 800 | TRUE |
| RTX 2070 | 8 | R7-3800X | 16 | DF | 192 | 256 | 64 | 64 | 32 | 8 | 1100 | FALSE |
| RTX 2070 | 8 | i7-8700 | 16 | DF | 144 | 256 | 64 | 64 | 22 | 8 | 400 | TRUE |
| RTX 2070 S | 8 | R5-3600 | 16 | DF | 192 | 256 | 64 | 64 | 22 | 5 | 600 | FALSE |
| RTX 2080 | 8 | i7-8700 | 16 | DF | 224 | 512 | 80 | 80 | 22 | 2 | 406 | TRUE |
| RTX 2080 | 8 | i7-8700 | 16 | DF | 192 | 512 | 64 | 64 | 22 | 7 | 570 | TRUE |
| RTX 2080 | 8 | i7-8700 | 16 | DF | 192 | 512 | 80 | 80 | 26 | 3 | 570 | TRUE |
| RTX 2080 | 8 | i7-8700 | 16 | DF | 224 | 512 | 64 | 64 | 22 | 5 | 580 | TRUE |
| RTX 2080 | 8 | R7-3800X | 16 | DF-UD | 320 | 256 | 64 | 64 | 22 | 5 | 478 | TRUE |
| RTX 2080 Ti | 11 | i7-9700K | 16 | DF-UD | 256 | 256 | 64 | 64 | 22 | 20 | 800 | TRUE |
| RTX 2080 Ti | 11 | i9-9900K | 32 | LIAE-U | 256 | 256 | 64 | 64 | 22 | 6 | 700 | TRUE |
| RTX 2080 Ti x2 | 22 | R7-2700 | 32 | LIAE | 192 | 256 | 64 | 64 | 22 | 20 | 1230 | TRUE |
| RTX 3090 | 24 | i7-9700K | 16 | DF-UD | 256 | 256 | 64 | 64 | 22 | 16 | 581 | TRUE |
| RTX 3090 | 24 | R9-3900X | 32 | LIAE-UD | 384 | 384 | 116 | 116 | 16 | 6 | 942 | TRUE |
| Tesla P100 | 16 | Colab | 16 | DF | 192 | 768 | 80 | 80 | 22 | 8 | 1000 | TRUE |
| Tesla P100 | 16 | Colab | 16 | DF | 192 | 256 | 64 | 64 | 22 | 18 | 1200 | TRUE |
| Tesla P100 | 16 | Colab | 16 | DF | 192 | 256 | 64 | 64 | 22 | 12 | 800 | TRUE |
| Tesla P100 | 16 | Colab | 16 | DF-UD | 256 | 320 | 96 | 96 | 22 | 4 | 460 | TRUE |
| Titan RTX | 24 | TR-3970X | 128 | DF | 400 | 256 | 64 | 64 | 22 | 6 | 1350 | TRUE |
| Titan RTX | 24 | E5-1650 | 64 | DF | 256 | 256 | 64 | 64 | 22 | 16 | 4100 | TRUE |
| Titan RTX | 24 | E5-1650 | 64 | DF | 224 | 256 | 64 | 64 | 22 | 20 | 4600 | TRUE |
| Titan RTX x2 | 48 | TR-3970X | 128 | DF | 512 | 256 | 64 | 64 | 22 | 6 | 1700 | TRUE |
| Titan RTX x2 | 48 | TR-3970X | 128 | DF | 512 | 256 | 64 | 64 | 22 | 8 | 2100 | FALSE |
| Titan RTX x2 | 48 | TR-3970X | 128 | DF | 400 | 256 | 64 | 64 | 22 | 12 | 2200 | TRUE |

Please use my testing method to measure the performance of your configuration: run the model twice, once in a low-load and once in a high-load scenario.
Also make sure you are testing with the latest version of DFL and use the original builds; do not test on forks like the MVE one, only iperov's version (unless you disable all its additional features).

Low model load (you must test with these values):

RW (random warp): enabled
UY (uniform yaw): disabled
EMP (eyes and mouth priority): disabled
LRD (learning rate dropout): disabled
GPU Opt on GPU: TRUE
GAN: disabled
Face Style Power: 0 (disabled)
Background Style Power: 0 (disabled)
TrueFace: 0 (disabled; only applies to DF archis)
Color Transfer: RCT
Clipgrad: FALSE

High model load (you must test with these values):

RW (random warp): disabled
UY (uniform yaw): enabled
EMP (eyes and mouth priority): enabled
LRD (learning rate dropout): enabled (on GPU)
GPU Opt on GPU: TRUE
GAN: 0.1
GAN Dims: default value (16)
GAN Patch Size: 1/8 of model resolution
Face Style Power: 0.1
Background Style Power: 0 (disabled)
Color Transfer: RCT
Clipgrad: FALSE
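The two test profiles can be sketched as plain dictionaries. The keys below only mirror the trainer's prompts and are not an actual DFL API; `gan_patch_size` encodes the 1/8-of-resolution rule from the high-load settings:

```python
def gan_patch_size(resolution: int) -> int:
    """High-load rule from this post: GAN patch size = 1/8 of model resolution."""
    return resolution // 8

# Low-load profile (keys are illustrative, modeled on the trainer prompts)
LOW_LOAD = {
    "random_warp": True,        # RW: enabled
    "uniform_yaw": False,       # UY: disabled
    "eyes_mouth_prio": False,   # EMP: disabled
    "lr_dropout": False,        # LRD: disabled
    "models_opt_on_gpu": True,  # GPU Opt on GPU: TRUE
    "gan_power": 0.0,           # GAN: disabled
    "face_style_power": 0.0,
    "bg_style_power": 0.0,
    "true_face_power": 0.0,     # only applies to DF archis
    "ct_mode": "rct",
    "clipgrad": False,
}

def high_load(resolution: int) -> dict:
    """High-load profile; the GAN patch size depends on model resolution."""
    return {
        "random_warp": False,
        "uniform_yaw": True,
        "eyes_mouth_prio": True,
        "lr_dropout": True,          # on GPU
        "models_opt_on_gpu": True,
        "gan_power": 0.1,
        "gan_dims": 16,              # default value
        "gan_patch_size": gan_patch_size(resolution),
        "face_style_power": 0.1,
        "bg_style_power": 0.0,
        "ct_mode": "rct",
        "clipgrad": False,
    }

print(high_load(384)["gan_patch_size"])  # 48, matching the template example
```

For a 384 model the rule gives a patch size of 48, which is the value shown in the example template below.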

If you want to provide additional settings using different parameters for GAN, GAN Dims, GAN Patch Size, FSP (Face Style Power), BSP (Background Style Power), TF (TrueFace), CT (Color Transfer), etc., you can do so, but they must be submitted along with the standard testing method results, in a separate table using the 2nd template, so the two can be compared.

Template with example values:
| Field | Example |
| --- | --- |
| GPU | RTX 3090 |
| VRAM (GB) | 24 |
| CPU | i9-13900K |
| RAM (GB) | 64 |
| OS | Windows 11 |
| Page File Size | 256 |
| Model | SAEHD |
| Architecture | LIAE-UDT |
| Resolution | 384 |
| Batch Size (High Load) | 6 |
| Batch Size (Low Load) | 12 |
| Iteration Time, High Load (ms) | 1000 |
| Iteration Time, Low Load (ms) | 500 |
| VRAM Usage Before Training (GB) | 1 |
| VRAM Usage During Training (GB) | 23.6 |
| AE Dims | 320 |
| E Dims | 64 |
| D Dims | 64 |
| D Mask Dims | 22 |
| Inter Dims | - |
| Adabelief | YES |
| GAN Dims | 16 |
| GAN Patch Size | 48 |

2nd template (for additional, non-standard settings), with example values:

| Field | Example |
| --- | --- |
| Batch Size (typical lowest) | 4 |
| Iteration Time, Highest Load (ms) | 1400 |
| VRAM Usage During Training (GB) | 23.6 |
| GPU Opt on GPU | FALSE |
| Adabelief | NO |
| LRD | On CPU |
| GAN Dims | 24 |
| GAN Patch Size | 96 |
| GAN Power | 0.2 |
| Face Style Power | 0.001 |
| Background Style Power | 0.0001 |
| Color Transfer | LCT |
| True Face | 0.01 |
| Clipgrad | TRUE |


WaveBedo, posted on 2024-01-20 01:17:26:
Hey big guy,

Have you tested Face Style Power?
Can it make the face shape closer to dst and reduce ghosting?


OP (Lau9), posted on 2024-01-20 01:25:22:
Quoting WaveBedo (2024-01-20 01:17): "Hey big guy, have you tested Face Style Power?"

It's the color style that gets closer to dst. Since it blends with dst more smoothly, the boundary becomes weaker; the advantage shows when you feather the mask.


WaveBedo, posted on 2024-01-20 01:46:20:
Quoting Lau9 (2024-01-20 01:25): "It's the color style that gets closer to dst. Since it blends with dst more smoothly, the boundary becomes weaker; the advantage shows when you feather ..."

OK, got it.

Posted on 2024-01-20 01:56:38:

Quoting WaveBedo (2024-01-20 01:17): "Hey big guy, have you tested Face Style Power?"

Sigh, my 3070 got obsolete so fast.

Posted on 2024-01-24 06:11:21:

What are the highest parameters a 4090 can run?
