Table of Contents

1. Background
2. Fine-tuning Approach
  2.1 Key Environment Versions
  2.2 Steps
    2.2.1 Download llama-factory
    2.2.2 Prepare the dataset
    2.2.3 Fine-tuning mode
    2.2.4 Fine-tuning script
  2.3 Pitfalls
    2.3.1 Problem 1: ValueError: Undefined dataset xxxx in dataset_info.json.
    2.3.2 Problem 2: ValueError: Target modules {'c_attn'} not found in the base model. Please check the target modules and try again.
    2.3.3 Problem 3: RuntimeError: The size of tensor a (1060864) must match the size of tensor b (315392) at non-singleton dimension 0.
  2.4 Experiments
    2.4.1 Experiment 1: multi-GPU fine-tuning

1. Background
The previous post covered LoRA fine-tuning on a MacBook. The same approach also works on a GPU: in the train.py script, simply set the device to cuda.
However, when the dataset gets large, single-GPU fine-tuning becomes a bottleneck, so multi-GPU fine-tuning is worth considering. After looking around online, a common setup for multi-GPU fine-tuning is DeepSpeed + LLaMA-Factory.
This post records how fine-tuning went with that setup; it is only a personal learning note.
2. Fine-tuning Approach
2.1 Key Environment Versions
Module        Version
python        3.10
CUDA          12.6
torch         2.5.1
peft          0.12.0
transformers  4.46.2
accelerate    1.1.1
trl           0.9.6
deepspeed     0.15.4
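As a quick sanity check, a small script like the one below (just a sketch; it assumes the packages above are already installed) prints the versions actually present in the environment:

# Print the versions of the key packages in the current environment.
import sys
from importlib.metadata import PackageNotFoundError, version

import torch

print("python", sys.version.split()[0])
print("CUDA (torch build)", torch.version.cuda)
for pkg in ["torch", "peft", "transformers", "accelerate", "trl", "deepspeed"]:
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")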
2.2 Steps
2.2.1 Download llama-factory
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"

2.2.2 Prepare the dataset
The dataset is the widely shared "Empresses in the Palace" (甄嬛传) dataset. Its structure looks like the following, and the file is named huanhuan.json:
[
  {
    "instruction": "小姐，别的秀女都在求中选，唯有咱们小姐想被撂牌子，菩萨一定记得真真儿的——",
    "input": "",
    "output": "嘘——都说许愿说破是不灵的。"
  },
  ...
]

You also need a dataset description file, dataset_info.json. Since this is a local fine-tune, training first reads dataset_info.json and then resolves the entry to the actual dataset file:
{
  "identity": {
    "file_name": "test_data.json"
  }
}
Note: the dataset file must be valid JSON, otherwise an error will be raised.
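Before launching training, it can save a round trip to check the file with a few lines of Python (a minimal sketch; it assumes the dataset file, e.g. huanhuan.json, sits in the current directory and uses the alpaca-style instruction/input/output fields shown above):

# Verify the dataset file is valid JSON and every sample has the expected fields.
import json

with open("huanhuan.json", encoding="utf-8") as f:
    data = json.load(f)  # raises JSONDecodeError if the file is not valid JSON

assert isinstance(data, list) and data, "the dataset should be a non-empty JSON list"
for i, sample in enumerate(data):
    missing = {"instruction", "input", "output"} - sample.keys()
    assert not missing, f"sample {i} is missing fields: {missing}"
print(f"{len(data)} samples look well-formed")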
2.2.3 Fine-tuning mode
This run uses DeepSpeed ZeRO-3, so add a new config file, ds_config_zero3.json, under the LLaMA-Factory directory.
{
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "bf16": {
    "enabled": "auto"
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto"
    }
  },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": {
      "device": "none",
      "pin_memory": true
    },
    "offload_param": {
      "device": "none",
      "pin_memory": true
    },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "sub_group_size": 1e9,
    "reduce_bucket_size": "auto",
    "stage3_prefetch_bucket_size": "auto",
    "stage3_param_persistence_threshold": "auto",
    "stage3_max_live_parameters": 1e9,
    "stage3_max_reuse_distance": 1e9,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "steps_per_print": 100,
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "wall_clock_breakdown": false
}
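A quick way to catch typos before launching the run is to parse the config and list which fields are still "auto" (the transformers/DeepSpeed integration fills these in from the training arguments). A minimal sketch, assuming the file sits in the working directory:

# Parse ds_config_zero3.json and list the fields that are left as "auto".
import json

with open("ds_config_zero3.json", encoding="utf-8") as f:
    cfg = json.load(f)  # fails loudly on malformed JSON

def auto_fields(node, path=""):
    if isinstance(node, dict):
        for key, value in node.items():
            yield from auto_fields(value, f"{path}.{key}" if path else key)
    elif node == "auto":
        yield path

print("ZeRO stage:", cfg["zero_optimization"]["stage"])
print("auto-filled fields:", sorted(auto_fields(cfg)))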
2.2.4 Fine-tuning script
#!/bin/bash
# run_train_bash.sh
# Record the start time
START=$(date +%s.%N)

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch src/train.py \
    --deepspeed ds_config_zero3.json \
    --stage sft \
    --do_train True \
    --model_name_or_path /root/ai_project/fine-tuning-by-lora/models/model/qwen/Qwen2___5-7B-Instruct \
    --finetuning_type lora \
    --template qwen \
    --dataset_dir /root/ai_project/fine-tuning-by-lora/dataset/ \
    --dataset identity \
    --cutoff_len 1024 \
    --learning_rate 5e-04 \
    --num_train_epochs 10 \
    --max_samples 100000 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --max_grad_norm 1.0 \
    --logging_steps 5 \
    --save_steps 100 \
    --warmup_steps 0 \
    --neftune_noise_alpha 0 \
    --lora_rank 8 \
    --lora_dropout 0.1 \
    --lora_alpha 32 \
    --lora_target q_proj,v_proj,k_proj,gate_proj,up_proj,o_proj,down_proj \
    --output_dir ./output/qwen_7b_ds/train_2024_02_27 \
    --bf16 True \
    --plot_loss True

# Record the end time
END=$(date +%s.%N)
# Compute the elapsed time
DUR=$(echo "$END - $START" | bc)
# Print the elapsed time
printf "Execution time: %.6f seconds\n" "$DUR"

A few of the key arguments above are explained below:
--deepspeed: path to the DeepSpeed config, enabling DeepSpeed-accelerated fine-tuning
--model_name_or_path: path of the model to fine-tune
--finetuning_type: fine-tuning method; LoRA is used here
--template: prompt template used when building training and inference prompts; it differs between LLMs, here qwen
--dataset_dir: local dataset directory
--dataset: which dataset entry in dataset_info.json to use
--lora_target: names of the modules that LoRA is applied to
--output_dir: output path for the model
For the full set of fine-tuning parameters, see the Llama-Factory parameter reference.
The other arguments are just the usual ones for LoRA fine-tuning with peft; they map onto the familiar LoraConfig fields shown below.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    inference_mode=False,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)

2.3 Pitfalls
2.3.1 Problem 1: ValueError: Undefined dataset xxxx in dataset_info.json.
This error appears when the script is launched with --dataset identity but dataset_info.json contains no key named "identity". Make sure the key you pass to --dataset actually exists in dataset_info.json.
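The check behind this error can be reproduced ahead of time with a few lines of Python (a sketch; the paths and the dataset name follow the launch script above):

# Make sure the value passed to --dataset exists as a key in dataset_info.json.
import json
import os

dataset_dir = "/root/ai_project/fine-tuning-by-lora/dataset/"  # --dataset_dir
dataset_name = "identity"                                      # --dataset

with open(os.path.join(dataset_dir, "dataset_info.json"), encoding="utf-8") as f:
    info = json.load(f)

if dataset_name not in info:
    raise ValueError(f"Undefined dataset {dataset_name} in dataset_info.json.")
print("ok:", dataset_name, "->", info[dataset_name]["file_name"])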
2.3.2 Problem 2: ValueError: Target modules {'c_attn'} not found in the base model. Please check the target modules and try again.
This error is raised when --lora_target is set to the frequently copied c_attn, which does not exist in this base model. Fix it by using the usual LoRA target modules for this model: q_proj,v_proj,k_proj,gate_proj,up_proj,o_proj,down_proj. Also mind the format; a malformed value will raise an error as well.
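To see which module names are actually valid --lora_target values for a given base model, you can list its Linear sub-modules without loading the weights (a sketch; the model path follows the launch script above, and instantiating on the meta device only builds the module structure):

# List the Linear sub-module names of the base model; these are the valid LoRA targets.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_path = "/root/ai_project/fine-tuning-by-lora/models/model/qwen/Qwen2___5-7B-Instruct"
config = AutoConfig.from_pretrained(model_path)
with torch.device("meta"):  # build the architecture only, no weights in memory
    model = AutoModelForCausalLM.from_config(config)

linear_names = sorted({name.split(".")[-1]
                       for name, module in model.named_modules()
                       if isinstance(module, torch.nn.Linear)})
print(linear_names)
# For Qwen2.5 this prints q_proj/k_proj/v_proj/o_proj/gate_proj/up_proj/down_proj
# (plus lm_head); there is no c_attn, hence the error above.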
2.3.3 Problem 3: RuntimeError: The size of tensor a (1060864) must match the size of tensor b (315392) at non-singleton dimension 0.
A tensor-size mismatch like this is most likely caused by conflicting checkpoints, for example when a run is interrupted and then restarted with the same output path. Pointing --output_dir to a fresh directory fixes it.
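One simple way to avoid colliding with a stale checkpoint directory is to timestamp the output path for each run, for example (a sketch, not part of the original script):

# Build a fresh, timestamped output directory for each run.
from datetime import datetime

output_dir = f"./output/qwen_7b_ds/train_{datetime.now():%Y_%m_%d_%H%M%S}"
print(output_dir)  # pass this value to --output_dir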
2.4 Experiments
This test uses multi-GPU fine-tuning and compares its performance against single-GPU fine-tuning. Experiment 2 will be added later.
2.4.1 Experiment 1: multi-GPU fine-tuning
Fine-tuning on 3,630 samples across 8 GPUs with the parameters below takes 280 steps in total:
--learning_rate 5e-04
--num_train_epochs 10
--per_device_train_batch_size 4
--gradient_accumulation_steps 4

Step-count calculation:
280 (steps) ≈ 3630 [samples] / (4 [gradient accumulation] × 4 [per-device batch size] × 8 [GPUs]) × 10 [epochs]
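The arithmetic can be double-checked in a couple of lines (a sketch; the global batch of 128 matches the "Total train batch size" reported in the log below, and the per-epoch count is rounded down to whole optimization steps):

# Reproduce the total optimization step count from the launch parameters.
num_samples = 3630       # dataset size
per_device_batch = 4     # --per_device_train_batch_size
grad_accum = 4           # --gradient_accumulation_steps
num_gpus = 8             # CUDA_VISIBLE_DEVICES=0,...,7
num_epochs = 10          # --num_train_epochs

global_batch = per_device_batch * grad_accum * num_gpus  # 128
steps_per_epoch = num_samples // global_batch            # 28 (partial window dropped)
print(global_batch, steps_per_epoch, steps_per_epoch * num_epochs)  # 128 28 280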
Training output:

[INFO|trainer.py:2314] 2025-02-13 08:05:51,986 ***** Running training *****
[INFO|trainer.py:2315] 2025-02-13 08:05:51,986   Num examples = 3,630
[INFO|trainer.py:2316] 2025-02-13 08:05:51,986   Num Epochs = 10
[INFO|trainer.py:2317] 2025-02-13 08:05:51,986   Instantaneous batch size per device = 4
[INFO|trainer.py:2320] 2025-02-13 08:05:51,986   Total train batch size (w. parallel, distributed & accumulation) = 128
[INFO|trainer.py:2321] 2025-02-13 08:05:51,986   Gradient Accumulation steps = 4
[INFO|trainer.py:2322] 2025-02-13 08:05:51,986   Total optimization steps = 280
.....
{loss: 4.9293, grad_norm: 0.2562304304292013, learning_rate: 0.0005, epoch: 0.18}
{loss: 3.1626, grad_norm: 0.19361592540369985, learning_rate: 0.0005, epoch: 0.35}
{loss: 2.9427, grad_norm: 0.20313623353647364, learning_rate: 0.0005, epoch: 0.53}
{loss: 2.9178, grad_norm: 0.1633448296719697, learning_rate: 0.0005, epoch: 0.7}
{loss: 2.9116, grad_norm: 0.17241006366450623, learning_rate: 0.0005, epoch: 0.88}
{loss: 3.0758, grad_norm: 0.1853092845879873, learning_rate: 0.0005, epoch: 1.05}
{loss: 2.5562, grad_norm: 0.25384200353297537, learning_rate: 0.0005, epoch: 1.23}
{loss: 2.6158, grad_norm: 0.2876837326269363, learning_rate: 0.0005, epoch: 1.4}
{loss: 2.512, grad_norm: 0.2837102971247916, learning_rate: 0.0005, epoch: 1.58}
{loss: 2.5483, grad_norm: 0.30202190399292755, learning_rate: 0.0005, epoch: 1.75}
{loss: 2.5193, grad_norm: 0.3233037587534178, learning_rate: 0.0005, epoch: 1.93}
{loss: 2.513, grad_norm: 0.3515238818579015, learning_rate: 0.0005, epoch: 2.11}
{loss: 1.9465, grad_norm: 0.36555535286863944, learning_rate: 0.0005, epoch: 2.28}
{loss: 1.9132, grad_norm: 0.44229627583386516, learning_rate: 0.0005, epoch: 2.46}
{loss: 1.9235, grad_norm: 0.40111643921780515, learning_rate: 0.0005, epoch: 2.63}
{loss: 1.9685, grad_norm: 0.38583421690959196, learning_rate: 0.0005, epoch: 2.81}
{loss: 1.985, grad_norm: 0.3777334046946069, learning_rate: 0.0005, epoch: 2.98}
{loss: 1.538, grad_norm: 0.5845252817927833, learning_rate: 0.0005, epoch: 3.16}
{loss: 1.1791, grad_norm: 0.49414752481138235, learning_rate: 0.0005, epoch: 3.33}
{loss: 1.1892, grad_norm: 0.5207790387399577, learning_rate: 0.0005, epoch: 3.51}
{loss: 1.1712, grad_norm: 0.5654238235933979, learning_rate: 0.0005, epoch: 3.68}
{loss: 1.2197, grad_norm: 0.5001492538398, learning_rate: 0.0005, epoch: 3.86}
{loss: 1.2771, grad_norm: 0.4000143395083798, learning_rate: 0.0005, epoch: 4.04}
{loss: 0.6298, grad_norm: 0.5240283431664541, learning_rate: 0.0005, epoch: 4.21}
{loss: 0.5911, grad_norm: 0.47002369192531646, learning_rate: 0.0005, epoch: 4.39}
{loss: 0.5958, grad_norm: 0.5061747301822586, learning_rate: 0.0005, epoch: 4.56}
{loss: 0.6624, grad_norm: 0.5320579836394266, learning_rate: 0.0005, epoch: 4.74}
{loss: 0.6724, grad_norm: 0.517103117110723, learning_rate: 0.0005, epoch: 4.91}
{loss: 0.5444, grad_norm: 0.3714622914636231, learning_rate: 0.0005, epoch: 5.09}
{loss: 0.2655, grad_norm: 0.4465471808710968, learning_rate: 0.0005, epoch: 5.26}
{loss: 0.2743, grad_norm: 0.41505929687508386, learning_rate: 0.0005, epoch: 5.44}
{loss: 0.2786, grad_norm: 0.43996251312895884, learning_rate: 0.0005, epoch: 5.61}
{loss: 0.2785, grad_norm: 0.4471303138465939, learning_rate: 0.0005, epoch: 5.79}
{loss: 0.2788, grad_norm: 0.48705340679487363, learning_rate: 0.0005, epoch: 5.96}
{loss: 0.162, grad_norm: 0.2921252791608401, learning_rate: 0.0005, epoch: 6.14}
{loss: 0.1149, grad_norm: 0.30941692561321993, learning_rate: 0.0005, epoch: 6.32}
{loss: 0.1173, grad_norm: 0.29967155968778664, learning_rate: 0.0005, epoch: 6.49}
{loss: 0.13, grad_norm: 0.3630332521647509, learning_rate: 0.0005, epoch: 6.67}
{loss: 0.1344, grad_norm: 0.3125941281688891, learning_rate: 0.0005, epoch: 6.84}
{loss: 0.1441, grad_norm: 0.5404481434654501, learning_rate: 0.0005, epoch: 7.02}
{loss: 0.0567, grad_norm: 0.1855727739202254, learning_rate: 0.0005, epoch: 7.19}
{loss: 0.0702, grad_norm: 0.23380098002732216, learning_rate: 0.0005, epoch: 7.37}
{loss: 0.068, grad_norm: 0.23202593567669585, learning_rate: 0.0005, epoch: 7.54}
{loss: 0.0829, grad_norm: 0.23115965023606377, learning_rate: 0.0005, epoch: 7.72}
{loss: 0.0766, grad_norm: 0.23135481635275945, learning_rate: 0.0005, epoch: 7.89}
{loss: 0.067, grad_norm: 0.13494924636148561, learning_rate: 0.0005, epoch: 8.07}
{loss: 0.0396, grad_norm: 0.18481019773823124, learning_rate: 0.0005, epoch: 8.25}
{loss: 0.0429, grad_norm: 0.19484298588581364, learning_rate: 0.0005, epoch: 8.42}
{loss: 0.0416, grad_norm: 0.17873844875438857, learning_rate: 0.0005, epoch: 8.6}
{loss: 0.0454, grad_norm: 0.17303531479845663, learning_rate: 0.0005, epoch: 8.77}
{loss: 0.0485, grad_norm: 0.17425356837750286, learning_rate: 0.0005, epoch: 8.95}
{loss: 0.0334, grad_norm: 0.0869599535276032, learning_rate: 0.0005, epoch: 9.12}
{loss: 0.0255, grad_norm: 0.163465911292555, learning_rate: 0.0005, epoch: 9.3}
{loss: 0.0293, grad_norm: 0.16522989964282914, learning_rate: 0.0005, epoch: 9.47}
{loss: 0.0265, grad_norm: 0.15019554228481286, learning_rate: 0.0005, epoch: 9.65}
{loss: 0.0326, grad_norm: 0.14628796123788834, learning_rate: 0.0005, epoch: 9.82}
.....
***** train metrics *****
  epoch                    =     9.8246
  total_flos               =  153160GF
  train_loss               =     1.0567
  train_runtime            = 1:01:16.28
  train_samples_per_second =      9.874
  train_steps_per_second   =      0.076
Figure saved at: ./output/qwen_7b_ds/train_2024_02_27/training_loss.png

Execution time: 3717.986219 seconds
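As a final sanity check, the reported throughput is consistent with the logged training runtime (the extra ~40 seconds in the script's Execution time is presumably setup and saving outside the training loop). A quick back-of-the-envelope check:

# Cross-check the reported throughput against the logged training runtime.
runtime_s = 1 * 3600 + 1 * 60 + 16.28     # train_runtime = 1:01:16.28
samples_seen = 3630 * 10                  # dataset size x epochs
total_steps = 280

print(round(samples_seen / runtime_s, 3))  # ~9.874 (train_samples_per_second)
print(round(total_steps / runtime_s, 3))   # ~0.076 (train_steps_per_second)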