Abstract: Humans shift between different personas depending on social context. Large Language Models (LLMs) demonstrate a similar flexibility in adopting different personas and behaviors. Existing approaches, however, typically adapt such behavior through external knowledge such as prompting, retrieval-augmented generation (RAG), or fine-tuning. We ask: do LLMs really need external context or parameters to adapt to different behaviors, or do they already have such knowledge embedded in their parameters? In this work, we show that LLMs already contain persona-specialized subnetworks in their parameter space. Using small calibration datasets, we identify distinct activation signatures associated with different personas. Guided by these statistics, we develop a masking strategy that isolates lightweight persona subnetworks. Building on these findings, we further ask: how can we discover opposing subnetworks in the model that lead to binary-opposed personas, such as introvert vs. extrovert? To further enhance separation in binary opposition scenarios, we introduce a contrastive pruning strategy that identifies parameters responsible for the statistical divergence between opposing personas. Our method is entirely training-free and relies solely on the language model's existing parameter space. Across diverse evaluation settings, the resulting subnetworks exhibit significantly stronger persona alignment than baselines that require external knowledge, while being more efficient. Our findings suggest that diverse human-like behaviors are not merely induced in LLMs, but are already embedded in their parameter space, pointing toward a new perspective on controllable and interpretable personalization in large language models.
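To make the contrastive idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: per-unit activation statistics are collected from small calibration sets for two opposing personas, units are scored by the divergence of those statistics, and only the most divergent fraction is retained as a binary mask. The function name, the mean-absolute-activation statistic, and the `keep_ratio` parameter are all assumptions for illustration.

```python
import numpy as np

def contrastive_persona_mask(acts_a, acts_b, keep_ratio=0.05):
    """Hypothetical sketch: score each unit by how much its mean absolute
    activation diverges between two opposing personas, and keep the top
    fraction of units as the persona subnetwork mask."""
    stat_a = np.abs(acts_a).mean(axis=0)   # activation signature, persona A
    stat_b = np.abs(acts_b).mean(axis=0)   # activation signature, persona B
    divergence = np.abs(stat_a - stat_b)   # units that separate the personas
    k = max(1, int(keep_ratio * divergence.size))
    thresh = np.partition(divergence, -k)[-k]
    return divergence >= thresh            # boolean mask of retained units

# Toy usage: 32 calibration samples over 100 units, where only the first
# five units actually differ between the two "personas".
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(32, 100))
b = a.copy()
b[:, :5] += 3.0
mask = contrastive_persona_mask(a, b, keep_ratio=0.05)
```

In this toy setup the five genuinely divergent units are the ones the mask retains, which mirrors the intent of the contrastive pruning step: isolating the parameters responsible for the statistical divergence between opposing personas.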