deepfacelab中文网

Views: 768 | Replies: 6

Does anyone know what error this is?


OP | Posted on 2023-11-1 23:56:24
This post was last edited by nmsl7417 on 2023-11-2 00:07

Could not load library cudnn_ops_infer64_8.dll. Error code 1455
Please make sure cudnn_ops_infer64_8.dll is in your library path!
Process Process-24:
Traceback (most recent call last):
Process Process-21:
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 134, in batch_func
    x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 145, in process
    img = get_eyes_mouth_mask()*mask
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 77, in get_eyes_mouth_mask
    eyes_mask = LandmarksProcessor.get_image_eye_mask (sample_bgr.shape, sample_landmarks)
  File "D:\DF\_internal\DeepFaceLab\facelib\LandmarksProcessor.py", line 417, in get_image_eye_mask
    hull_mask = np.zeros( (h,w,1),dtype=np.float32)
MemoryError: Unable to allocate 1.00 MiB for an array with shape (512, 512, 1) and data type float32

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "multiprocessing\process.py", line 258, in _bootstrap
  File "multiprocessing\process.py", line 93, in run
  File "D:\DF\_internal\DeepFaceLab\core\joblib\SubprocessGenerator.py", line 54, in process_func
    gen_data = next (self.generator_func)
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 136, in batch_func
    raise Exception ("示例 %s 中出现异常。 错误:%s" % (sample.filename, traceback.format_exc() ) )
Exception: 示例 D:\DF\workspace\data_src\aligned\moyceci_3207487450398575684_0.jpg 中出现异常。 错误:Traceback (most recent call last):
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 134, in batch_func
    x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 145, in process
    img = get_eyes_mouth_mask()*mask
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 77, in get_eyes_mouth_mask
    eyes_mask = LandmarksProcessor.get_image_eye_mask (sample_bgr.shape, sample_landmarks)
  File "D:\DF\_internal\DeepFaceLab\facelib\LandmarksProcessor.py", line 417, in get_image_eye_mask
    hull_mask = np.zeros( (h,w,1),dtype=np.float32)
MemoryError: Unable to allocate 1.00 MiB for an array with shape (512, 512, 1) and data type float32

Traceback (most recent call last):
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 134, in batch_func
    x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 145, in process
    img = get_eyes_mouth_mask()*mask
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 80, in get_eyes_mouth_mask
    return np.clip(mask, 0, 1)
  File "<__array_function__ internals>", line 6, in clip
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\numpy\core\fromnumeric.py", line 2097, in clip
    return _wrapfunc(a, 'clip', a_min, a_max, out=out, **kwargs)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\numpy\core\fromnumeric.py", line 58, in _wrapfunc
    return bound(*args, **kwds)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\numpy\core\_methods.py", line 141, in _clip
    um.clip, a, min, max, out=out, casting=casting, **kwargs)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\numpy\core\_methods.py", line 94, in _clip_dep_invoke_with_casting
    return ufunc(*args, out=out, **kwargs)
MemoryError: Unable to allocate 1.00 MiB for an array with shape (512, 512, 1) and data type float32

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "multiprocessing\process.py", line 258, in _bootstrap
  File "multiprocessing\process.py", line 93, in run
  File "D:\DF\_internal\DeepFaceLab\core\joblib\SubprocessGenerator.py", line 54, in process_func
    gen_data = next (self.generator_func)
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 136, in batch_func
    raise Exception ("示例 %s 中出现异常。 错误:%s" % (sample.filename, traceback.format_exc() ) )
Exception: 示例 D:\DF\workspace\data_src\aligned\IMG-20230511-WA0010_0.jpg 中出现异常。 错误:Traceback (most recent call last):
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 134, in batch_func
    x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 145, in process
    img = get_eyes_mouth_mask()*mask
  File "D:\DF\_internal\DeepFaceLab\samplelib\SampleProcessor.py", line 80, in get_eyes_mouth_mask
    return np.clip(mask, 0, 1)
  File "<__array_function__ internals>", line 6, in clip
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\numpy\core\fromnumeric.py", line 2097, in clip
    return _wrapfunc(a, 'clip', a_min, a_max, out=out, **kwargs)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\numpy\core\fromnumeric.py", line 58, in _wrapfunc
    return bound(*args, **kwds)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\numpy\core\_methods.py", line 141, in _clip
    um.clip, a, min, max, out=out, casting=casting, **kwargs)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\numpy\core\_methods.py", line 94, in _clip_dep_invoke_with_casting
    return ufunc(*args, out=out, **kwargs)
MemoryError: Unable to allocate 1.00 MiB for an array with shape (512, 512, 1) and data type float32


番茄哥 | Posted on 2023-11-2 00:22:28
Set up virtual memory (a larger Windows page file). You can search the forum; there are posts with the relevant settings.
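
Both errors in the first post point to the same thing: the Windows commit limit (physical RAM plus page file) is exhausted, not any single huge array. Error code 1455 is ERROR_COMMITMENT_LIMIT ("the paging file is too small for this operation to complete"), and the MemoryError fails on a mere 1.00 MiB. Before and after enlarging the page file you can check how much commit headroom is left; a minimal sketch (my own addition, assuming Windows and only the Python standard library):

import ctypes

# MEMORYSTATUSEX as defined by the Win32 API (kernel32).
class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),   # commit limit (RAM + page file)
        ("ullAvailPageFile", ctypes.c_ulonglong),   # commit still available
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

gib = 1024 ** 3
print("RAM free          : %.1f / %.1f GiB" % (status.ullAvailPhys / gib, status.ullTotalPhys / gib))
print("Commit limit free : %.1f / %.1f GiB" % (status.ullAvailPageFile / gib, status.ullTotalPageFile / gib))

If "Commit limit free" sits near zero while training, the page file is still too small for this model and batch size.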

OP | Posted on 2023-11-2 00:51:14
番茄哥 posted on 2023-11-2 00:22:
Set up virtual memory (a larger Windows page file). You can search the forum; there are posts with the relevant settings.

I have already adjusted it, but it still shows this:

Error: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[1024,114,114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_57 (defined at D:\DF\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_7/concat/_947]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[1024,114,114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_57 (defined at D:\DF\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node Pad_57:
LeakyRelu_46 (defined at D:\DF\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Input Source operations connected to node Pad_57:
LeakyRelu_46 (defined at D:\DF\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Original stack trace for 'Pad_57':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "D:\DF\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "D:\DF\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "D:\DF\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 425, in on_initialize
    gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder(gpu_dst_code)
  File "D:\DF\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "D:\DF\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 234, in forward
    self.out_conv1(x),
  File "D:\DF\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "D:\DF\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 87, in forward
    x = tf.pad (x, padding, mode='CONSTANT')
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3528, in pad
    result = gen_array_ops.pad(tensor, paddings, name=name)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6487, in pad
    "Pad", input=input, paddings=paddings, name=name)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

Traceback (most recent call last):
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[1024,114,114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Pad_57}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_7/concat/_947]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[1024,114,114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Pad_57}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\DF\_internal\DeepFaceLab\mainscripts\Trainer.py", line 131, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "D:\DF\_internal\DeepFaceLab\models\ModelBase.py", line 480, in train_one_iter
    losses = self.onTrainOneIter()
  File "D:\DF\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "D:\DF\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[1024,114,114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_57 (defined at D:\DF\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_7/concat/_947]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[1024,114,114] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_57 (defined at D:\DF\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node Pad_57:
LeakyRelu_46 (defined at D:\DF\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Input Source operations connected to node Pad_57:
LeakyRelu_46 (defined at D:\DF\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Original stack trace for 'Pad_57':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "D:\DF\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "D:\DF\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "D:\DF\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 425, in on_initialize
    gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder(gpu_dst_code)
  File "D:\DF\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "D:\DF\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 234, in forward
    self.out_conv1(x),
  File "D:\DF\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "D:\DF\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 87, in forward
    x = tf.pad (x, padding, mode='CONSTANT')
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3528, in pad
    result = gen_array_ops.pad(tensor, paddings, name=name)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6487, in pad
    "Pad", input=input, paddings=paddings, name=name)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "D:\DF\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

Posted on 2023-11-2 03:19:10
That error comes from cudnn_ops_infer64_8.dll failing to load. What graphics card are you using? Is the model too heavy for it?

Posted on 2023-11-2 09:47:41
Your graphics card memory (VRAM) is insufficient...
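
It can also help to confirm how much VRAM is actually free while your other programs are open. A quick sketch, not part of DeepFaceLab, assuming an NVIDIA card with nvidia-smi on the PATH:

import subprocess

# Query per-GPU memory; avoids capture_output so it also runs on Python 3.6.
out = subprocess.check_output(
    ["nvidia-smi",
     "--query-gpu=name,memory.total,memory.used,memory.free",
     "--format=csv,noheader"],
    universal_newlines=True,
)
print(out.strip())

If the desktop, a browser, or another training session already holds most of the card, settings that fit on paper will still OOM.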

Posted on 2023-11-2 15:02:55
Just turn the training parameters down.
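
For scale: the single tensor the allocator gave up on above, shape [1024, 114, 114] in float32, is only about 51 MiB, but TensorFlow keeps many such activations alive at once, and their total grows roughly linearly with batch size and quadratically with resolution. A back-of-the-envelope sketch (my own illustration, not DeepFaceLab code):

import numpy as np

# Size of the one tensor named in the OOM message above.
shape = (1024, 114, 114)                   # "shape[1024,114,114]" in the log
itemsize = np.dtype(np.float32).itemsize   # 4 bytes
print("one activation tensor: %.1f MiB" % (np.prod(shape) * itemsize / 2**20))  # ~50.8 MiB

# Activation memory scales roughly with batch_size * resolution**2.
# The factors below are purely illustrative, not SAEHD option values.
for batch_scale, res_scale in [(1.0, 1.0), (0.5, 1.0), (1.0, 0.75)]:
    print("batch x%.2f, resolution x%.2f -> activations x%.2f"
          % (batch_scale, res_scale, batch_scale * res_scale ** 2))

So halving the batch size roughly halves the activation memory, and lowering the training resolution or the model dims cuts it further.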

Posted on 2023-11-2 17:34:08

Your drive letter is wrong; you should change it to a drive letter like O, W, or Z.
