
Using Multiple GPUs in Parallel to Speed Up Sentence Embedding with sentence-transformers

Preface

When we need to vectorize a large amount of data so it can be stored in a vector database, and the server has multiple GPUs at our disposal, we would like to use all of them in parallel to speed up the embedding step.

Code

It only takes a few lines of code, so let's get straight to it:

```python
from sentence_transformers import SentenceTransformer

# Important: you need to shield your code with `if __name__`. Otherwise,
# CUDA runs into issues when spawning new processes.
if __name__ == "__main__":
    # Create a large list of 100k sentences
    sentences = ["This is sentence {}".format(i) for i in range(100000)]

    # Define the model
    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Start the multi-process pool on all available CUDA devices
    pool = model.start_multi_process_pool()

    # Compute the embeddings using the multi-process pool
    emb = model.encode_multi_process(sentences, pool)
    print("Embeddings computed. Shape:", emb.shape)

    # Optional: stop the processes in the pool
    model.stop_multi_process_pool(pool)
```

Note that the `if __name__ == "__main__":` guard is mandatory; without it you get the following error:

```
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
```

In fact, the official repository already provides this code; I simply copied it. Source: computing_embeddings_multi_gpu.py

The official repository also provides a streaming encoding example, likewise parallelized across multiple GPUs:

```python
from sentence_transformers import SentenceTransformer, LoggingHandler
import logging
from datasets import load_dataset
from torch.utils.data import DataLoader
from tqdm import tqdm

logging.basicConfig(
    format="%(asctime)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    level=logging.INFO,
    handlers=[LoggingHandler()],
)

# Important: you need to shield your code with `if __name__`. Otherwise,
# CUDA runs into issues when spawning new processes.
if __name__ == "__main__":
    # Set params
    data_stream_size = 16384   # Size of the data that is loaded into memory at once
    chunk_size = 1024          # Size of the chunks that are sent to each process
    encode_batch_size = 128    # Batch size of the model

    # Load a large dataset in streaming mode. More info: https://huggingface.co/docs/datasets/stream
    dataset = load_dataset("yahoo_answers_topics", split="train", streaming=True)
    dataloader = DataLoader(dataset.with_format("torch"), batch_size=data_stream_size)

    # Define the model
    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Start the multi-process pool on all available CUDA devices
    pool = model.start_multi_process_pool()

    for i, batch in enumerate(tqdm(dataloader)):
        # Compute the embeddings using the multi-process pool
        sentences = batch["best_answer"]
        batch_emb = model.encode_multi_process(
            sentences, pool, chunk_size=chunk_size, batch_size=encode_batch_size
        )
        print("Embeddings computed for 1 batch. Shape:", batch_emb.shape)

    # Optional: stop the processes in the pool
    model.stop_multi_process_pool(pool)
```

Official example: computing_embeddings_streaming.py
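By default the pool spins up one worker process per visible CUDA device. If you only want to dedicate some of the GPUs to encoding, `start_multi_process_pool` also accepts an explicit device list. The snippet below is a minimal sketch of that variant; the device names and the batch/chunk sizes are illustrative, and if your installed version of sentence-transformers differs, check the method signature for the `target_devices` argument.

```python
from sentence_transformers import SentenceTransformer

if __name__ == "__main__":
    sentences = ["This is sentence {}".format(i) for i in range(100000)]
    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Only use GPU 0 and GPU 1 instead of every visible CUDA device
    pool = model.start_multi_process_pool(target_devices=["cuda:0", "cuda:1"])

    # chunk_size controls how many sentences are sent to each worker at a time,
    # batch_size is the per-forward-pass batch size on each GPU
    emb = model.encode_multi_process(sentences, pool, batch_size=64, chunk_size=5000)
    print("Embeddings computed. Shape:", emb.shape)

    model.stop_multi_process_pool(pool)
```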
While the encoding runs, nvidia-smi shows all eight A800 cards at 100% utilization:

```
NVIDIA-SMI 515.105.01    Driver Version: 515.105.01    CUDA Version: 11.7

GPU  Name                Bus-Id            Temp  Pwr:Usage/Cap  Memory-Usage          GPU-Util
  0  NVIDIA A800-SXM...  00000000:23:00.0   58C   297W / 400W   75340MiB / 81920MiB   100%
  1  NVIDIA A800-SXM...  00000000:29:00.0   71C   352W / 400W   80672MiB / 81920MiB   100%
  2  NVIDIA A800-SXM...  00000000:52:00.0   68C   398W / 400W   75756MiB / 81920MiB   100%
  3  NVIDIA A800-SXM...  00000000:57:00.0   58C   341W / 400W   75994MiB / 81920MiB   100%
  4  NVIDIA A800-SXM...  00000000:8D:00.0   56C   319W / 400W   70084MiB / 81920MiB   100%
  5  NVIDIA A800-SXM...  00000000:92:00.0   70C   354W / 400W   76314MiB / 81920MiB   100%
  6  NVIDIA A800-SXM...  00000000:BF:00.0   73C   360W / 400W   75876MiB / 81920MiB   100%
  7  NVIDIA A800-SXM...  00000000:C5:00.0   57C   364W / 400W   80404MiB / 81920MiB   100%
```

Blazingly fast.
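Since the whole point here is to load the vectors into a vector database, a natural follow-up is to persist the embeddings as soon as they are computed. The sketch below stores them in a local FAISS index as one possible stand-in for a real vector database; FAISS itself (`pip install faiss-cpu`), the 384-dimensional output of all-MiniLM-L6-v2, and the file name are assumptions beyond the original post, and any database client could be swapped in at the `index.add` step.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

if __name__ == "__main__":
    model = SentenceTransformer("all-MiniLM-L6-v2")
    pool = model.start_multi_process_pool()

    # all-MiniLM-L6-v2 produces 384-dimensional embeddings; an inner-product
    # index over L2-normalized vectors is equivalent to cosine similarity
    index = faiss.IndexFlatIP(384)

    sentences = ["This is sentence {}".format(i) for i in range(100000)]
    emb = model.encode_multi_process(sentences, pool).astype(np.float32)
    faiss.normalize_L2(emb)   # normalize in place
    index.add(emb)

    # Persist the index to disk (hypothetical file name)
    faiss.write_index(index, "sentences.index")
    print("Indexed vectors:", index.ntotal)

    model.stop_multi_process_pool(pool)
```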