Posted on 2023-12-10 22:11:46
I'm on an RTX 3080 running the Mao localized DFL build (DFL_maozhihanhua_RTX3000), and even with batch size 1 it still throws this error:
Error: failed to allocate memory
[[node Mul_201 (defined at E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\optimizers\OptimizerBase.py:23) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
Errors may have originated from an input operation.
Input Source operations connected to node Mul_201:
gradients/Conv2D_16_grad/Conv2DBackpropFilter (defined at E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\ops\__init__.py:55)
Original stack trace for 'Mul_201':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
self.on_initialize()
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 564, in on_initialize
src_dst_loss_gv_op = self.src_dst_opt.get_update_op (nn.average_gv_list (gpu_G_loss_gvs))
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py", line 58, in get_update_op
g = self.tf_clip_norm(g, self.clipnorm, tf.cast(norm, g.dtype) )
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\optimizers\OptimizerBase.py", line 23, in tf_clip_norm
then_expression = tf.scalar_mul(c / n, g)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 625, in scalar_mul
return gen_math_ops.mul(scalar, x, name)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6245, in mul
"Mul", x=x, y=y, name=name)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
op_def=op_def)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
self._traceback = tf_stack.extract_stack_for_node(self._c_op)
Traceback (most recent call last):
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
return fn(*args)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
target_list, run_metadata)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: failed to allocate memory
[[{{node Mul_201}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 131, in trainerThread
iter, iter_time = model.train_one_iter()
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 480, in train_one_iter
losses = self.onTrainOneIter()
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
self.target_dstm_em:target_dstm_em,
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
run_metadata_ptr)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
feed_dict_tensor, options, run_metadata)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
run_metadata)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
raise type(e)(node_def, op, message) # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: failed to allocate memory
[[node Mul_201 (defined at E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\optimizers\OptimizerBase.py:23) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
Errors may have originated from an input operation.
Input Source operations connected to node Mul_201:
gradients/Conv2D_16_grad/Conv2DBackpropFilter (defined at E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\ops\__init__.py:55)
Original stack trace for 'Mul_201':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
self.on_initialize()
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 564, in on_initialize
src_dst_loss_gv_op = self.src_dst_opt.get_update_op (nn.average_gv_list (gpu_G_loss_gvs))
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\optimizers\AdaBelief.py", line 58, in get_update_op
g = self.tf_clip_norm(g, self.clipnorm, tf.cast(norm, g.dtype) )
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\optimizers\OptimizerBase.py", line 23, in tf_clip_norm
then_expression = tf.scalar_mul(c / n, g)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 625, in scalar_mul
return gen_math_ops.mul(scalar, x, name)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 6245, in mul
"Mul", x=x, y=y, name=name)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
op_def=op_def)
File "E:\AI\DF\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
self._traceback = tf_stack.extract_stack_for_node(self._c_op)
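
For reference, the "Hint" lines in the log refer to TensorFlow's RunOptions. Below is a minimal, self-contained sketch (not DFL code; DFL would need the options threaded into its own session.run calls inside the trainer) of how report_tensor_allocations_upon_oom is enabled in TF1 graph mode, so that a ResourceExhaustedError also lists the tensors that were resident when the allocation failed. The placeholder graph here is purely illustrative.

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # the flag only helps in graph mode, as the hint notes

# Tiny stand-in graph; DFL builds its own SAEHD graph instead.
x = tf.placeholder(tf.float32, shape=[None, 1024], name="x")
w = tf.Variable(tf.random.normal([1024, 1024]), name="w")
y = tf.matmul(x, w)

# The flag from the hint: on OOM, the error message will include a dump of
# currently allocated tensors, which helps identify what is eating VRAM.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y,
                   feed_dict={x: np.zeros((8, 1024), np.float32)},
                   options=run_options)

In stock DFL this would mean editing the session.run calls in its model/trainer code, so in practice most people just lower the memory footprint (smaller resolution/dims, disable adabelief/gradient clipping, enable models_opt_on_gpu=False) rather than patching the build.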