After starting training, the following messages appeared, but the progress bar keeps moving. Does this affect the training?
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/ ... iewform?usp=sf_link
================================================================================
CUDA SETUP: Loading binary D:\novelai-webui-aki-v2\models\lora-scripts\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll...
use 8-bit AdamW optimizer | {}
override steps. steps for 10 epochs is / 指定エポックまでのステップ数: 1280
running training / 学習開始
num train images * repeats / 学習画像の数×繰り返し回数: 128
num reg images / 正則化画像の数: 0
num batches per epoch / 1epochのバッチ数: 128
num epochs / epoch数: 10
batch size per device / バッチサイズ: 1
gradient accumulation steps / 勾配を合計するステップ数 = 1
total optimization steps / 学習ステップ数: 1280
steps: 0%| | 0/1280 [00:00<?, ?it/s]epoch 1/10
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
(the two lines above repeat seven times in the log)
D:\novelai-webui-aki-v2\models\lora-scripts\venv\lib\site-packages\xformers\ops\fmha\flash.py:338: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
and inp.query.storage().data_ptr() == inp.key.storage().data_ptr()
steps: 1%|▊ | 17/1280 [02:09<2:39:58, 7.60s/it, loss=nan]
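
For reference, the "use 8-bit AdamW optimizer" line presumably maps to bitsandbytes' AdamW8bit class. A minimal sketch of how that optimizer is typically constructed; the model and learning rate here are placeholders, not values from this run:

    import torch
    import bitsandbytes as bnb

    model = torch.nn.Linear(16, 16)  # placeholder module, illustrative only
    # AdamW8bit stores the optimizer moments in 8-bit, reducing VRAM use;
    # lr=1e-4 is an assumed value, not taken from the log above
    optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=1e-4)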
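The reported step count is consistent with the other numbers in the log; a worked check using only the values printed above:

    # Worked check of "total optimization steps": values copied from the log.
    num_images_times_repeats = 128  # num train images * repeats
    batch_size = 1                  # batch size per device
    grad_accum_steps = 1            # gradient accumulation steps
    num_epochs = 10

    batches_per_epoch = num_images_times_repeats // batch_size       # 128
    total_steps = batches_per_epoch * num_epochs // grad_accum_steps
    print(total_steps)              # 1280, matching the log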
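As for the repeated Triton message: "No module named 'triton'" just means the optional triton package failed to import, so xformers skips the optimizations that depend on it. A minimal sketch of that kind of import probe, runnable in the same venv; the printed messages are mine, not xformers' exact wording:

    try:
        import triton  # optional dependency; no official Windows wheel exists
        print("Triton available:", triton.__version__)
    except ImportError as exc:
        # Expected when no Triton wheel is installed; xformers falls back
        # to its non-Triton code paths, as the warning itself states.
        print("Triton not available:", exc)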