The University of Washington introduces "proxy tuning", an efficient large-model tuning method
The University of Washington has introduced a more efficient large-model tuning method called "proxy tuning". Instead of touching a large base model's internal weights, the method steers the base model's predictions at decoding time using the difference between the outputs of a small tuned model and its untuned counterpart. As generative AI products such as ChatGPT have developed, base-model parameter counts have kept growing, so tuning the weights directly demands substantial time and compute. Proxy tuning improves efficiency: it preserves the knowledge gained from tuning during decoding while retaining the benefits of larger-scale pretraining. Researchers applied it to the 13B and 70B base models of LLaMA-2...
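The core operation is simple enough to sketch. Below is a minimal, hypothetical illustration of decoding-time proxy tuning in Python with PyTorch and Hugging Face transformers. The checkpoint names for the small tuned "expert" and its untuned counterpart are placeholders, and the logit arithmetic (base + tuned − untuned) reflects the contrastive idea described above, not the authors' exact implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch of proxy tuning at decoding time. Assumes three causal LMs that
# share one tokenizer/vocabulary: a large untuned base model, a small
# tuned "expert", and that expert's untuned counterpart. The small-model
# checkpoint names below are placeholders, not the paper's exact models.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")
expert = AutoModelForCausalLM.from_pretrained("path/to/small-tuned-model")    # hypothetical
anti_expert = AutoModelForCausalLM.from_pretrained("path/to/small-base-model")  # hypothetical
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")

@torch.no_grad()
def proxy_tuned_logits(input_ids):
    """Next-token logits of the base model, shifted by the difference
    between the small tuned and small untuned models' logits."""
    base_logits = base(input_ids).logits[:, -1, :]
    expert_logits = expert(input_ids).logits[:, -1, :]
    anti_logits = anti_expert(input_ids).logits[:, -1, :]
    # Core idea: add (tuned - untuned) as a decoding-time offset,
    # steering the base model without touching its weights.
    return base_logits + (expert_logits - anti_logits)

def generate(prompt, max_new_tokens=32):
    """Greedy decoding loop driven by the adjusted logits."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        next_id = proxy_tuned_logits(ids).argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

Because only output logits are combined, the large model never needs gradient updates; the cost of tuning is paid once, on the small model.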