ChatGPT: A Technical Analysis
LIU Qun (刘群)
Huawei Noah's Ark Lab
An online lecture, 2023-02-16

Content
- ChatGPT overview
- ChatGPT's impressive capabilities
- Key technologies behind ChatGPT
- Limitations of ChatGPT
- Future directions for ChatGPT

The ChatGPT sensation
- User numbers: 1 million in 5 days, 100 million within 2 months
- Everyone started discussing ChatGPT; it spread at a speed comparable to COVID-19
- Google declared an internal "code red"
- Google hastily released Bard, but an error during the launch demo wiped 8% off its share price
- Microsoft invested an additional 10 billion US dollars in OpenAI
- Microsoft quickly launched the ChatGPT-powered New Bing and plans to integrate ChatGPT into the Office suite
- Major companies at home and abroad are following up rapidly

The official ChatGPT blog: introduction
"ChatGPT: Optimizing Language Models for Dialogue. We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response." (November 30, 2022)

"We are excited to introduce ChatGPT to get users' feedback and learn about its strengths and weaknesses. During the research preview, usage of ChatGPT is free."

Sample: ChatGPT asks clarifying questions to debug code.

USER: this code is not working like i expect, how do i fix it?

resultWorkerErr := make(chan error)
defer close(resultWorkerErr)
go func() {
    defer cancel()
    resultWorkerErr <- b.resultWorker(ctx)
}()
Language models
A language model assigns a probability to every string s: P_LM(s) ≥ 0, and the probabilities sum to one over all strings: Σ_s P_LM(s) = 1.

Language model definition
Language Modeling is the task of predicting what word comes next:
    the students opened their ___   (exams? minds? laptops? books?)
More formally: given a sequence of words x^(1), ..., x^(t), compute the probability distribution of the next word x^(t+1):
    P(x^(t+1) | x^(t), ..., x^(1))
where x^(t+1) can be any word in the vocabulary V. A system that does this is called a Language Model.
(Christopher Manning, Natural Language Processing with Deep Learning, Stanford U. CS224n)

The development of language models
- n-gram language models
- Neural network language models
- Recurrent neural network language models
- Transformer language models
- Pre-trained language models (PLMs): BERT (bidirectional masked language model), GPT (decoder-only language model)
- Large generative pre-trained language models (LLMs): GPT-3, ChatGPT

Pre-trained language models (PLMs)
- Typical examples: ELMo, BERT, GPT
- Pre-training-then-fine-tuning paradigm
- Language representations learned during pre-training are transferred to downstream tasks

The Transformer model
[Figure: Transformer architecture; source: Liliang Wen, Generalized Language Models: ULMFiT & OpenAI GPT (blog)]

Self-attention
[Figure: scaled dot-product self-attention (Vaswani et al., 2017)]
- Each token's representation is a dynamically weighted sum over all tokens
- The weights change as the input changes (BertViz tool; Vig et al., 2019)

Content: key technologies behind ChatGPT
- Pre-trained language models (PLMs)
- Large generative pre-trained language models (LLMs)
- Reinforcement learning from human feedback (RLHF)

Large generative pre-trained language models (LLMs)

                      PLMs                             LLMs
Typical models:       ELMo, BERT, GPT-2                GPT-3
Architecture:         BiLSTM, Transformer              Transformer
Attention:            bidirectional / unidirectional   unidirectional
Training objective:   mask & predict                   autoregressive generation
Best at:              understanding                    generation
Model size:           100M-1B parameters               1B-100B+ parameters
Downstream use:       fine-tuning                      fine-tuning & prompting
Emergent abilities:   domain transfer with small data  zero/few-shot learning, in-context learning, chain-of-thought

Introduction to GPT-3
- GPT-3 (Generative Pre-trained Transformer 3) is an autoregressive language model whose goal is to use deep learning to generate natural language that humans can understand.
- GPT-3 was trained and developed by OpenAI, an AI company in San Francisco; its design is based on the Transformer model developed by Google.
- GPT-3's neural network contains 175 billion parameters, the most of any neural network model at the time of its release.
- OpenAI published the GPT-3 paper in May 2020 and released a beta of the API to a small number of companies and development teams the following month.
- On September 22, 2020, Microsoft announced that it had obtained an exclusive license to GPT-3.

The GPT-3 model family
- ELMo: 93M params, 2-layer biLSTM
- BERT-base: 110M params, 12-layer Transformer
- BERT-large: 340M params, 24-layer Transformer
The language model "scaling wars"! (Mohit Iyyer, slides for CS685 Fall 2020, University of Massachusetts Amherst)
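The self-attention mechanism described in the slides above, where each token's output is a dynamically weighted sum over all tokens, can be sketched in a few lines of numpy. The single-head scaled dot-product formulation follows Vaswani et al. (2017); the toy dimensions and random weight matrices are invented purely for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (Vaswani et al., 2017).

    X: (seq_len, d_model) token representations.
    Returns (seq_len, d_k): each row is a weighted sum of all value vectors,
    with weights that depend on the input itself ("dynamic weights").
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # (seq_len, seq_len) attention matrix
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

Because the attention weights are recomputed from Q and K for every input, changing any token in X changes the mixing weights for all tokens, which is exactly the "dynamic weighting" the BertViz visualizations show.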
GPT-3 data sources

Dataset     Tokens (B)   Assumptions     Tokens/byte   Ratio     Size (GB)
Web data    410          -               0.71          1:1.9     570
WebText2    19           25% > WebText   0.38          1:2.6     50
Books1      12           Gutenberg       0.57          1:1.75    21
Books2      55           Bibliotik       0.54          1:1.84    101
Wikipedia   3            See RoBERTa     0.26          1:3.85    11.4
Total       499                                                  753.4

Table: GPT-3 datasets. (Alan D. Thompson, GPT-3.5 + ChatGPT: An illustrated overview, https://lifearchitect.ai/chatgpt/)

GPT-3 data sources: comparison with other large-scale language models [figure].

GPT-3 training data volume
Training token counts of large language models:
- GPT-3 (May 2020): 500B tokens
- Google's PaLM (April 2022): 780B
- DeepMind's Chinchilla: 1400B
- PanGu-α disclosed about 40B training tokens, less than one tenth of GPT-3
- Other large Chinese models have not disclosed their training token counts

For comparison (Mohit Iyyer, CS685 Fall 2020 slides):
- ELMo: 1B training tokens
- BERT: 3.3B training tokens
- RoBERTa: 30B training tokens

GPT-3 compute consumption
[Figure: the language model "scaling wars", log scale; Mohit Iyyer, CS685 Fall 2020 slides]

Few-shot and zero-shot learning (in-context learning)
[Figures from Brown et al., Language Models are Few-Shot Learners, arXiv:2005.14165, 2020]

Chain-of-thought
Preprint: https://arxiv.org/pdf/2201.11903.pdf

Magic word: Let's think step by step

(c) Zero-shot
Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?
A: The answer (arabic numerals) is
(Output) 8 ✗

(d) Zero-shot-CoT
Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?
A: Let's think step by step.
(Output) There are 16 balls in total. Half of the balls are golf balls. That means that there are 8 golf balls. Half of the golf balls are blue. That means that there are 4 blue golf balls.
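The difference between the zero-shot and Zero-shot-CoT prompts above is literally one appended trigger sentence. A minimal sketch of the two prompt builders (the strings mirror the figure; no model call is made here):

```python
QUESTION = ("A juggler can juggle 16 balls. Half of the balls are golf balls, "
            "and half of the golf balls are blue. How many blue golf balls are there?")

def zero_shot_prompt(question):
    # (c) standard zero-shot: ask for the answer directly
    return f"Q: {question}\nA: The answer (arabic numerals) is"

def zero_shot_cot_prompt(question):
    # (d) Zero-shot-CoT: the same prompt plus one "magic" trigger sentence
    return f"Q: {question}\nA: Let's think step by step."

print(zero_shot_cot_prompt(QUESTION))
```

Sending the second prompt instead of the first is the entire intervention; the model's continuation then contains the intermediate reasoning steps.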
(b) Few-shot-CoT
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.
Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?
A: (Output) The juggler can juggle 16 balls. Half of the balls are golf balls. So there are 16/2 = 8 golf balls. Half of the golf balls are blue. So there are 8/2 = 4 blue golf balls. The answer is 4.

(a) Few-shot
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.
Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?
A: (Output) The answer is 8. ✗

Figure 1: Example inputs and outputs of GPT-3 with (a) standard Few-shot (Brown et al., 2020), (b) Few-shot-CoT (Wei et al., 2022), (c) standard Zero-shot, and (d) ours (Zero-shot-CoT). Similar to Few-shot-CoT, Zero-shot-CoT facilitates multi-step reasoning (blue text) and reaches the correct answer where standard prompting fails. Unlike Few-shot-CoT, which uses step-by-step reasoning examples per task, ours does not need any examples and just uses the same prompt "Let's think step by step" across all tasks (arithmetic, symbolic, commonsense, and other logical reasoning tasks).

In contrast to the excellent performance of LLMs in intuitive and single-step system-1 [Stanovich and West, 2000] tasks with task-specific few-shot or zero-shot prompting [Liu et al., 2021b], even language models at the scale of 100B or more parameters had struggled on system-2 tasks requiring slow and multi-step reasoning [Rae et al., 2021]. To address this shortcoming, Wei et al. [2022] and Wang et al. [2022] have proposed chain of thought prompting (CoT), which feeds LLMs with step-by-step reasoning examples rather than standard question and answer examples (see Fig. 1-a).
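The few-shot formats in Figure 1 are just exemplars concatenated in front of the test question; for Few-shot-CoT, each exemplar's answer additionally contains the intermediate reasoning. A sketch of how such prompts are assembled (exemplar texts taken from the figure):

```python
# Each exemplar is a (question, answer) pair; for Few-shot-CoT the answer
# spells out the reasoning steps before the final answer.
EXEMPLARS_FEW_SHOT = [
    ("Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
     "Each can has 3 tennis balls. How many tennis balls does he have now?",
     "The answer is 11."),
]
EXEMPLARS_FEW_SHOT_COT = [
    (EXEMPLARS_FEW_SHOT[0][0],
     "Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis "
     "balls. 5 + 6 = 11. The answer is 11."),
]

def build_prompt(exemplars, question):
    """Concatenate exemplars and the test question into one prompt string."""
    parts = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

question = ("A juggler can juggle 16 balls. Half of the balls are golf balls, "
            "and half of the golf balls are blue. How many blue golf balls are there?")
prompt = build_prompt(EXEMPLARS_FEW_SHOT_COT, question)
print(prompt)
```

The model sees the worked exemplar(s) in its context and imitates the demonstrated answer style for the final, unanswered question; no parameters are updated.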
Such chain of thought demonstrations facilitate models to generate a reasoning path that decomposes the complex reasoning into multiple easier steps. Notably, with CoT, reasoning performance then satisfies the scaling laws better and jumps up with the size of the language models. For example, when combined with the 540B-parameter PaLM model [Chowdhery et al., 2022], chain of thought prompting significantly increases the performance over standard few-shot prompting across several benchmark reasoning tasks, e.g., GSM8K (17.9% → 58.1%).

While the successes of CoT prompting [Wei et al., 2022], along with those of many other task-specific prompting work [Gao et al., 2021, Schick and Schütze, 2021, Liu et al., 2021b], are often attributed to LLMs' ability for few-shot learning [Brown et al., 2020], we show that LLMs are decent zero-shot reasoners by adding a simple prompt, "Let's think step by step", to facilitate step-by-step thinking before answering each question (see Figure 1). Despite the simplicity, our Zero-shot-CoT successfully generates a plausible reasoning path in a zero-shot manner and reaches the correct answer in a problem where the standard zero-shot approach fails. Importantly, our Zero-shot-CoT is versatile and task-agnostic, unlike most prior task-specific prompt engineering in the form of examples (few-shot) or templates (zero-shot) [Liu et al., 2021b]: it can facilitate step-by-step answers across various reasoning tasks, including arithmetic (MultiArith [Roy and Roth, 2015], GSM8K [Cobbe et al., 2021], AQUA-RAT [Ling et al., 2017], and SVAMP [Patel et al., 2021]), symbolic (Last Letter and Coin Flip), commonsense reasoning (CommonsenseQA [Talmor et al., 2019] and StrategyQA [Geva et al., 2021]), and other logical reasoning tasks (Date Understanding and Tracking Shuffled Objects from BIG-bench [big, 2021]), without modifying the prompt per task.

We empirically evaluate Zero-shot-CoT against other prompting baselines in Figure 1. While our Zero-shot-CoT underperforms Few-shot-CoT with carefully-crafted and task-specific step-by-step examples, Zero-shot-CoT achieves enormous score gains compared to the zero-shot baseline, e.g. from 17.7% to 78.7% on MultiArith and from 10.4% to 40.7% on GSM8K with a 175B-parameter model.
(Preprint: http://arxiv.org/abs/2205.11916)
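The Zero-shot-CoT pipeline described above actually involves two prompting stages: first elicit the reasoning with "Let's think step by step", then append an answer-extraction prompt and parse the number out of the response. A sketch under that reading of the method, with `fake_model` as a hypothetical stand-in for a real LLM call:

```python
import re

def fake_model(prompt):
    """Stand-in for an LLM API call, returning canned text for this demo."""
    if "arabic numerals" not in prompt:
        # Stage-1 call: produce a reasoning chain.
        return (" There are 16 balls in total. Half of the balls are golf balls. "
                "That means there are 8 golf balls. Half of the golf balls are "
                "blue. That means there are 4 blue golf balls.")
    # Stage-2 call: produce the bare answer.
    return " 4"

def zero_shot_cot(question, model=fake_model):
    # Stage 1: reasoning extraction via the trigger sentence.
    prompt1 = f"Q: {question}\nA: Let's think step by step."
    reasoning = model(prompt1)
    # Stage 2: answer extraction, conditioned on the generated reasoning.
    prompt2 = prompt1 + reasoning + "\nTherefore, the answer (arabic numerals) is"
    answer_text = model(prompt2)
    match = re.search(r"-?\d+", answer_text)  # parse the first integer
    return int(match.group()) if match else None

print(zero_shot_cot("A juggler can juggle 16 balls. Half of the balls are golf "
                    "balls, and half of the golf balls are blue. How many blue "
                    "golf balls are there?"))  # 4
```

Swapping `fake_model` for a real completion API is the only change needed; the two-stage structure and the final regex parse stay the same.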
Emergence and homogenization
[Figures from Bommasani et al., On the Opportunities and Risks of Foundation Models, arXiv:2108.07258 [cs.LG]]

The scale matters: the emergence of abilities

[Figure 2: eight panels plotting accuracy against model scale (training FLOPs) for LaMDA, GPT-3, Gopher, Chinchilla, and PaLM, with the random baseline shown: (A) Mod. arithmetic, (B) IPA transliterate, (C) Word unscramble, (D) Figure of speech, (E) TruthfulQA, (F) Grounded mappings, (G) Multi-task NLU, (H) Word in context.]

Figure 2: Eight examples of emergence in the few-shot prompting setting. Each point is a separate model. The ability to perform a task via few-shot prompting is emergent when a language model achieves random performance until a certain scale, after which performance significantly increases to well above random. Note that models that used more training compute also typically have more parameters; hence, we show an analogous figure with number of model parameters instead of training FLOPs as the x-axis in Figure 7. A-D: BIG-Bench (2022), 2-shot. E: Lin et al. (2021) and Rae et al. (2021). F: Patel and Pavlick (2022). G: Hendrycks et al. (2021), Rae et al. (2021), and Hoffmann et al. (2022). H: Brown et al. (2020), Hoffmann et al. (2022), and Chowdhery et al. (2022) on the WiC benchmark (Pilehvar and Camacho-Collados, 2019).

The ability to perform a task via few-shot prompting is emergent when a model has random performance until a certain scale, after which performance increases to well above random. Figure 2 shows eight such emergent abilities spanning five language model families from various work.
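The definition of emergence above (random performance until some scale, then a jump well above random) can be made concrete with a toy detector over (compute, accuracy) points. The curve below is invented for illustration and is not taken from Figure 2:

```python
def emergence_scale(points, random_baseline, margin=5.0):
    """Return the smallest scale at which accuracy exceeds the random
    baseline by `margin` points and stays above it from then on, else None.

    points: list of (training_flops, accuracy_percent), sorted by flops.
    """
    threshold = random_baseline + margin
    for i, (flops, acc) in enumerate(points):
        if acc >= threshold and all(a >= threshold for _, a in points[i:]):
            return flops
    return None

# Invented curve: near-random until ~1e22 FLOPs, then a sharp jump
# (qualitatively like Figure 2A).
curve = [(1e18, 1.0), (1e20, 1.5), (1e22, 2.0), (2e22, 25.0), (1e24, 45.0)]
print(emergence_scale(curve, random_baseline=1.0))  # 2e+22
```

On such a curve the detector returns the scale of the jump; on a smoothly improving curve it would return the first point past the margin, which is one reason the emergence literature also inspects the shape of the curve, not just a threshold crossing.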
BIG-Bench. Figure 2A-D depicts four emergent few-shot prompted tasks from BIG-Bench, a crowd-sourced suite of over 200 benchmarks for language model evaluation (BIG-Bench, 2022). Figure 2A shows an arithmetic benchmark that tests 3-digit addition and subtraction, as well as 2-digit multiplication. GPT-3 and LaMDA (Thoppilan et al., 2022) have close-to-zero performance for several orders of magnitude of training compute, before performance jumps to sharply above random at 2×10^22 training FLOPs (13B parameters) for GPT-3, and 10^23 training FLOPs (68B parameters) for LaMDA. Similar emergent behavior also occurs at around the same model scale for other tasks, such as transliterating from the International Phonetic Alphabet (Figure 2B), recovering a word from its scrambled letters (Figure 2C), and detecting figures of speech (Figure 2D). Even more emergent abilities from BIG-Bench are given in Table 1.

TruthfulQA. Figure 2E shows few-shot prompted performance on the TruthfulQA benchmark, which measures the ability to answer questions truthfully (Lin et al., 2021). This benchmark is adversarially curated against GPT-3 models, which do not perform above random, even when scaled to the largest model size. Small Gopher models also do not perform above random until scaled up to the largest model of 5×10^23 training FLOPs (280B parameters), for which performance jumps to more than 20% above random (Rae et al., 2021).

Grounded conceptual mappings. Figure 2F shows the task of grounded conceptual mappings, where language models must learn to map a conceptual domain, such as a cardinal direction, represented in a textual grid world (Patel and Pavlick, 2022). Again, performance only jumps to above random using the largest GPT-3 model.

Multi-task language understanding. Figure 2G shows the Massive Multi-task Language Understanding (MMLU) benchmark, which aggregates 57 tests covering a range of topics including math, history, law, and more (Hendrycks et al., 2021).
For GPT-3, Gopher, and Chinchilla, models of 10^22 training FLOPs (10B parameters) or smaller do not perform better than guessing on average over all the topics; scaling up to 3-5×10^23 training FLOPs (70B-280B parameters) enables performance to substantially surpass random. This result is striking because it could imply that the ability to solve knowledge-based questions spanning a large collection of topics might require scaling up past this threshold (for dense language models without retrieval or access to external memory).

Word in Context. Finally, Figure 2H shows the Word in Context (WiC) benchmark (Pilehvar and Camacho-Collados, 2019), which is a semantic understanding benchmark. Notably, GPT-3 and Chinchilla fail to achieve one-shot performance better than random, even when scaled to their largest model size of 5×10^23 FLOPs. Although these results so far may suggest that scaling alone may not enable models to solve WiC, above-random performance eventually emerged when PaLM was scaled to 2.5×10^24 FLOPs (540B parameters).
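The BIG-Bench arithmetic task discussed above is scored by exact-match accuracy on problems such as 3-digit addition. A toy evaluation harness along those lines; the `oracle` function is a hypothetical perfect solver standing in for a real model call:

```python
import random
import re

def make_problems(n, seed=0):
    """Generate n 3-digit addition problems as (prompt, target) pairs."""
    rng = random.Random(seed)
    problems = []
    for _ in range(n):
        a, b = rng.randint(100, 999), rng.randint(100, 999)
        problems.append((f"What is {a} plus {b}?", str(a + b)))
    return problems

def exact_match_accuracy(problems, answer):
    """Fraction of problems where the model's answer string matches exactly."""
    return sum(answer(p) == t for p, t in problems) / len(problems)

def oracle(prompt):
    # Hypothetical perfect solver used only to exercise the harness.
    a, b = map(int, re.findall(r"\d+", prompt))
    return str(a + b)

print(exact_match_accuracy(make_problems(100), oracle))  # 1.0
```

Plugging different model sizes into the `answer` slot and plotting accuracy against training compute is, in miniature, how the emergence curves in Figure 2A are produced.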
