AI Paper Daily (2026-03-15)

Published:

English version: /paper-news/2026-03-15/

Run statistics

  • Candidate papers: 437
  • Selected papers: 30
  • Deep reads completed: 30
  • Time window (UTC): 2026-03-13T00:00:00Z → 2026-03-14T00:00:00Z (weekend_backlog_sat, expanded=0)
Paper list used for summarization (arXiv ID · title; categories · score · selection rationale; tags)

  • 2603.12023 · Cascade: Composing Software-Hardware Attack Gadgets for Adversarial Threat Amplification in Compound AI Systems
    cs.CR, cs.AI · score 93 · Shows how classic CVEs + hardware side-channels can amplify attacks in compound LLM toolchains
    Tags: agent-security, compound-systems, CVE, side-channels, threat-modeling, tool-use
  • 2603.12094 · Human-Centred LLM Privacy Audits: Findings and Frictions
    cs.HC, cs.AI, cs.CL, cs.CY · score 93 · Human-centered privacy auditing tool + large user studies; measures name-conditioned personal inference risks.
    Tags: privacy, LLM auditing, PII, user study, measurement, deployment
  • 2603.11768 · Governing Evolving Memory in LLM Agents: Risks, Mechanisms, and the Stability and Safety Governed Memory (SSGM) Framework
    cs.AI · score 90 · Targets long-term agent memory risks (drift, corruption, privacy) with a governance framework (SSGM)
    Tags: agents, memory, governance, privacy, safety, robustness
  • 2603.09641 · PRECEPT: Planning Resilience via Experience, Context Engineering & Probing Trajectories - A Unified Framework for Test-Time Adaptation with Compositional Rule Learning and Pareto-Guided Prompt Evolution
    cs.AI, cs.IR · score 90 · Agent memory safety: exact rule retrieval + conflict-aware reliability + invalidation; tackles stale/adversarial knowledge.
    Tags: agents, memory, robustness, test-time adaptation, prompt evolution, knowledge integrity
  • 2603.09692 · ActiveUltraFeedback: Efficient Preference Data Generation using Active Learning
    cs.LG, cs.AI, cs.CL · score 90 · Active learning cuts RLHF preference-label cost; new pair-selection methods with strong empirical gains.
    Tags: RLHF, preference-data, active-learning, uncertainty, alignment, data-efficiency
  • 2603.09803 · Good Reasoning Makes Good Demonstrations: Implicit Reasoning Quality Supervision via In-Context Reinforcement Learning
    cs.LG · score 89 · RLVR variant that upweights high-quality reasoning traces via in-context utility signal (Evidence Gain).
    Tags: reasoning, RLVR, post-training, trace-quality, alignment, reward-weighting
  • 2603.09821 · One-Eval: An Agentic System for Automated and Traceable LLM Evaluation
    cs.CL · score 88 · Agentic, traceable evaluation workflows from NL requests; improves reproducibility and auditability
    Tags: evaluation, agentic-eval, reproducibility, benchmarking, tooling, auditing
  • 2603.12056 · XSkill: Continual Learning from Experience and Skills in Multimodal Agents
    cs.AI, cs.CL · score 88 · Continual improvement for multimodal agents via experience+skill memory without finetuning.
    Tags: agents, multimodal, continual-learning, tool-use, memory, retrieval
  • 2603.09403 · LLM as a Meta-Judge: Synthetic Data for NLP Evaluation Metric Validation
    cs.CL · score 86 · Scalable metric validation via LLM-generated synthetic degradations; high meta-correlation to human rankings.
    Tags: evaluation, metrics, synthetic data, LLMs, multilingual, benchmarking
  • 2603.09044 · Synergistic Directed Execution and LLM-Driven Analysis for Zero-Day AI-Generated Malware Detection
    cs.CR, cs.SE · score 86 · Targets LLM-enabled zero-day malware; hybrid concolic+LLM analysis with formal guarantees.
    Tags: security, malware, LLM-misuse, program-analysis, concolic-execution, robust-detection
  • 2603.11687 · SemBench: A Universal Semantic Framework for LLM Evaluation
    cs.CL, cs.AI · score 86 · Auto-generates semantic eval benchmarks from dictionaries; scalable, multilingual LLM understanding tests
    Tags: LLM evaluation, semantics, benchmark generation, multilingual, WiC
  • 2603.11413 · Evaluation format, not model capability, drives triage failure in the assessment of consumer health AI
    cs.HC, cs.AI · score 86 · Safety-relevant: shows triage failures depend heavily on eval format; naturalistic protocols change results.
    Tags: evaluation, healthcare, safety, triage, protocol-design, human-factors
  • 2603.09154 · Bioalignment: Measuring and Improving LLM Disposition Toward Biological Systems for AI Safety
    cs.CL · score 84 · Safety-relevant bias eval + tuning: measures LLM disposition toward bio vs synthetic solutions
    Tags: alignment, bias, eval, biosecurity, preference-tuning, safety-metrics
  • 2603.11864 · Social, Legal, Ethical, Empathetic and Cultural Norm Operationalisation for AI Agents
    cs.AI, cs.SE · score 84 · Process to operationalize social/legal/ethical norms into verifiable agent requirements; surveys tools and gaps.
    Tags: AI governance, agent norms, requirements, verification, ethics, safety engineering
  • 2603.11915 · CoMMET: To What Extent Can LLMs Perform Theory of Mind Tasks?
    cs.CL · score 84 · New multimodal, multi-turn Theory-of-Mind benchmark; useful for social reasoning evals.
    Tags: evaluation, benchmark, theory-of-mind, multimodal, multi-turn, llm-evals
  • 2603.11665 · Multi-Task Reinforcement Learning for Enhanced Multimodal LLM-as-a-Judge
    cs.CL · score 84 · Multi-task RL to improve MLLM-as-judge consistency and human correlation; relevant to eval reliability
    Tags: LLM-as-judge, evaluation, reinforcement learning, multimodal, preference modeling
  • 2603.09835 · Chow-Liu Ordering for Long-Context Reasoning in Chain-of-Agents
    cs.CL · score 84 · Improves long-context agent pipelines by optimizing chunk order to reduce bounded-memory information loss.
    Tags: long-context, agents, chain-of-agents, memory-bottleneck, ordering, reasoning
  • 2603.11799 · Exponential-Family Membership Inference: From LiRA and RMIA to BaVarIA
    cs.LG, cs.CR · score 83 · Unifies LiRA/RMIA/BASE MIAs; practical for privacy auditing of ML/LLMs via a single framework
    Tags: privacy, membership-inference, auditing, security, theory, evaluation
  • 2603.09052 · From Days to Minutes: An Autonomous AI Agent Achieves Reliable Clinical Triage in Remote Patient Monitoring
    cs.AI, cs.CL, cs.LG · score 83 · Autonomous clinical triage agent w/ tools and clinician validation; real-world agent reliability.
    Tags: agents, tool-use, evaluation, healthcare, reliability, MCP
  • 2603.12142 · Understanding Disclosure Risk in Differential Privacy with Applications to Noise Calibration and Auditing (Extended Version)
    cs.CR, cs.IT · score 82 · New DP disclosure-risk metric spanning membership/attribute/reconstruction; helps calibration & audits
    Tags: differential-privacy, auditing, noise-calibration, inference-attacks, privacy-metrics
  • 2603.09192 · Explainable Innovation Engine: Dual-Tree Agent-RAG with Methods-as-Nodes and Verifiable Write-Back
    cs.AI · score 82 · Agent-RAG with auditable method provenance + verifier write-back; improves controllability and traceability.
    Tags: RAG, agents, provenance, auditing, verification, knowledge base
  • 2603.11838 · DatedGPT: Preventing Lookahead Bias in Large Language Models with Time-Aware Pretraining
    cs.CL, q-fin.GN · score 82 · Time-aware pretraining to prevent lookahead bias; strong methodology for leakage control.
    Tags: data-contamination, temporal-generalization, evaluation, pretraining, finance
  • 2603.09344 · Robust Regularized Policy Iteration under Transition Uncertainty
    cs.AI, stat.ML · score 82 · Robust offline RL via worst-case transition uncertainty; targets distribution shift failures.
    Tags: offline-rl, robust-rl, distribution-shift, uncertainty, policy-iteration, safety
  • 2603.11545 · One Supervisor, Many Modalities: Adaptive Tool Orchestration for Autonomous Queries
    cs.CL, cs.AI, cs.LG · score 82 · Agentic tool orchestration across modalities with learned routing; strong efficiency metrics on 2,847 queries.
    Tags: agents, tool-use, orchestration, routing, multimodal, systems
  • 2603.09454 · ShapeMark: Robust and Diversity-Preserving Watermarking for Diffusion Models
    cs.CR · score 80 · Diffusion watermarking that preserves diversity while improving robustness; provenance angle.
    Tags: watermarking, diffusion-models, provenance, robustness, content-authenticity, security
  • 2603.12089 · EmbTracker: Traceable Black-box Watermarking for Federated Language Models
    cs.CR · score 79 · Client-traceable black-box watermarking for federated LMs; addresses model leakage accountability
    Tags: watermarking, federated-learning, model-leakage, accountability, backdoors, security
  • 2603.09909 · MedMASLab: A Unified Orchestration Framework for Benchmarking Multimodal Medical Multi-Agent Systems
    cs.AI · score 79 · Unified benchmark/orchestration for multimodal medical multi-agent systems; standardizes eval.
    Tags: multi-agent, benchmark, multimodal, evaluation, healthcare, orchestration
  • 2603.11721 · When OpenClaw Meets Hospital: Toward an Agentic Operating System for Dynamic Clinical Workflows
    cs.AI · score 78 · Proposes restricted-execution, doc-centric agent OS for hospitals; focuses on reliability/security needs
    Tags: agents, healthcare, sandboxing, permissions, deployment, reliability
  • 2603.09152 · DataFactory: Collaborative Multi-Agent Framework for Advanced Table Question Answering
    cs.AI, cs.DB, cs.IR · score 78 · Multi-agent TableQA targeting context limits and hallucinations via coordinated DB/KG teams (claims; need details).
    Tags: multi-agent, TableQA, hallucinations, tool use, structured data, ReAct
  • 2603.11689 · Explicit Logic Channel for Validation and Enhancement of MLLMs on Zero-Shot Tasks
    cs.AI · score 78 · Adds explicit logic/probabilistic reasoning channel to validate/enhance black-box MLLMs.
    Tags: multimodal-llm, reasoning, verification, probabilistic-inference, zero-shot, reliability

AI Paper Insights Brief

2026-03-15

0) Executive summary (read this first)

  • "Governance + structure" is emerging as the antidote to fragile agent memory and RAG: several papers replace flat vector retrieval with structured, auditable memory (checklists, provenance trees, exact-match keys) and add explicit gating/verification against drift, poisoning, and error accumulation.
  • Evaluation is increasingly treated as a first-class systems problem, not just a metric: agentic evaluation planners, semantic judges, and synthetic validation protocols aim to make evaluation traceable, format-robust, and cheaper; one health-triage replication shows that output format alone can manufacture "failures."
  • Security threat models are expanding from "LLM jailbreaks" to cross-stack reality: work on compound AI pipelines shows that composed software-hardware attack chains (e.g., bit flips) can bypass guardrails, while malware defense is moving toward LLM-guided concolic exploration with formal guarantees.
  • Robustness is being engineered through uncertainty and worst-case optimization: offline RL handles transition uncertainty with a tractable KL-regularized robust Bellman operator, and preference-data generation uses uncertainty to actively select more informative comparison pairs.
  • Provenance and IP protection for generative models are maturing: diffusion watermarking is moving from brittle numeric encodings to structural permutation encodings that preserve diversity, and federated language models gain client-traceable black-box watermarks (via embedding-space triggers).

2) Key themes (clusters)

Theme: Governed long-term memory for agents (drift / poisoning / auditability)

Theme: Structured / multi-agent RAG for controllable synthesis (and fewer hallucinations)

Theme: Evaluation infrastructure and "format realism" (judges, synthetic validation, orchestration)

Theme: Cross-stack security, privacy auditing, and provenance / watermarking

Theme: Robust optimization and sample-efficient alignment signals (uncertainty, intrinsic quality)

  • Why it matters: robustness and alignment are shifting toward more principled objectives (worst-case dynamics, uncertainty-driven data collection, intrinsic reasoning-quality signals) to reduce brittleness without heavy human labeling.
  • Representative papers
  • Shared methods
    • Uncertainty estimation (transition ensembles; epistemic uncertainty over returns) drives conservative planning or active sampling.
    • Implicit signals replace expensive step-level labels (Evidence Gain via in-context learning utility).
    • Theoretical operators/identities ground the training objectives (robust Bellman contraction; a Bayesian identity for return reweighting).
  • Open problems / failure modes
    • Compute-heavy (ActiveUltraFeedback reports ~200K GPU-hours; robust RL needs ensembles).
    • Generalization beyond tested domains (Evidence Gain demonstrated on math; uncertainty-set approximations in robust RL).
    • Preference pipelines still depend on LLM judges for "labeling."
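The worst-case-dynamics idea above can be made concrete with a small sketch. This is a generic robust value iteration over an ensemble of transition models, not RRPI's actual algorithm (which uses a KL-regularized robust Bellman operator); the function name and tabular setting are illustrative.

```python
import numpy as np

def robust_value_iteration(rewards, transition_ensemble, gamma=0.95, iters=200):
    """Worst-case value iteration over an ensemble of candidate dynamics.

    rewards: (S, A) array of immediate rewards.
    transition_ensemble: list of (S, A, S) arrays, each a transition model.
    Returns the robust value function V: (S,) array.
    """
    S, A = rewards.shape
    V = np.zeros(S)
    for _ in range(iters):
        # One-step backup under each model: stack of (S, A) Q-tables
        backups = np.stack([rewards + gamma * (P @ V) for P in transition_ensemble])
        # Pessimistic choice: worst model per (s, a), then greedy over actions
        V = backups.min(axis=0).max(axis=1)
    return V
```

A quick diagnostic in the spirit of the cluster: values in states where the ensemble disagrees should drop sharply relative to planning with a single nominal model.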

3) Technical synthesis

  • Structured memory is converging on "append-only provenance + mutable working set": AOS-H uses an append-only trail of document changes; SSGM proposes an immutable episodic ledger plus a mutable active graph with periodic reconciliation.
  • Gating is a shared safety primitive across domains: memory-write validation (SSGM), disabled-set pruning (PRECEPT), verifiable write-back thresholds (Agent-RAG), and black-box verification thresholds (EmbTracker VR>γ; ShapeMark's calibrated FPR).
  • Bounded-context reasoning is being tackled at the ordering/orchestration layer: CL-ORDER optimizes chunk order under a memory bottleneck; DataFactory and Supervisor-style systems route tasks to specialized modules/tools to avoid monolithic context overload.
  • Evaluation is moving from scalar scores to traceable pipelines: One-Eval's BenchInfo artifacts and MedMASLab's unified I/O + ledger echo the auditability goals of the memory-governance papers.
  • Semantic judges are replacing brittle exact-match metrics: MedMASLab's VLM-SJ and the triage replication's adjudication pipeline both show that format can dominate measured performance.
  • Robustness via worst-case selection appears in both RL and security: RRPI picks worst-case ensemble dynamics in its backups; Cascade composes worst-case gadget chains; CogniCrypt prioritizes worst-case (most malicious) paths via LLM scoring.
  • Bayesian thinking is returning as a stabilizer: PRECEPT uses Beta priors + Thompson sampling for source reliability; BaVarIA shrinks variance estimates in membership inference with an NIG prior; RAD formalizes adversary advantage from auxiliary knowledge in DP.
  • Provenance/traceability is being engineered into generative systems: ShapeMark achieves very-low-FPR detection while preserving diversity; EmbTracker adds client-level attribution in federated settings.
  • Time is becoming a first-class robustness axis: DatedGPT trains a family of year-cutoff models and probes temporal leakage via perplexity inversion; SSGM applies freshness decay in read filtering.
  • Clinical-agent work is splitting into "agent performance" vs "agent infrastructure": Sentinel demonstrates retrospective triage performance with tool retrieval; AOS-H focuses on OS-level constraints and auditable workflows (no empirical results reported).
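The Beta-prior + Thompson-sampling pattern attributed to PRECEPT above fits in a few lines. This is a generic illustration of the idea, not the paper's implementation; the class and method names are invented.

```python
import random

class SourceReliability:
    """Beta-Bernoulli reliability tracking per knowledge source.

    Each source starts at a uniform Beta(1, 1) prior; successes/failures of
    retrieved rules update the posterior, and selection draws one Thompson
    sample per source so unreliable sources still get occasional probes.
    """

    def __init__(self):
        self.params = {}  # source -> [alpha, beta]

    def record(self, source, success):
        """Update the posterior after observing whether the source's rule held."""
        a, b = self.params.setdefault(source, [1.0, 1.0])
        if success:
            self.params[source][0] = a + 1
        else:
            self.params[source][1] = b + 1

    def pick(self, sources, rng=random):
        """Thompson sampling: draw a reliability sample per source, use the best."""
        return max(
            sources,
            key=lambda s: rng.betavariate(*self.params.setdefault(s, [1.0, 1.0])),
        )
```

The design point is that sampling (rather than always taking the posterior mean) keeps exploring sources whose reliability estimate is still wide.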

4) Top 5 papers (with "why now")

1) Cascade: Composing Software-Hardware Attack Gadgets for Adversarial Threat Amplification in Compound AI Systems

  • Demonstrates cross-layer attack chains (algorithms + CVEs + hardware primitives) that can bypass guardrails in compound pipelines.
  • Reports guardrail-evasion rates under bit-flip strategies (e.g., 82%/72%/94% in Table 1) and an 80% jailbreak success rate (with long runtimes).
  • Why now: real deployments are increasingly compound systems; model-level red-teaming alone misses critical paths.
  • Stay skeptical: strong feasibility assumptions (co-located deployment / fault-injection control; transfer to proprietary stacks not well demonstrated).

2) Synergistic Directed Execution and LLM-Driven Analysis for Zero-Day AI-Generated Malware Detection

  • Combines LLM-guided path prioritization, concolic execution, a Transformer classifier, and RL refinement against AI-generated malware.
  • Reports 97.5% accuracy on a 2,500-sample AI-Gen-Malware dataset, reaching 95% coverage with 73.2% fewer paths than DFS.
  • Why now: LLMs make malware more polymorphic and trigger-driven; this method targets zero-day and evasive patterns.
  • Stay skeptical: guarantees hinge on classifier correctness and LLM ranking (relative completeness requires malicious paths to land in the top-B); heavy hardware requirements.

3) ShapeMark: Robust and Diversity-Preserving Watermarking for Diffusion Models

  • Introduces structural permutation encoding (SE) and payload-debiasing randomization (PDSR) to preserve diversity.
  • Reports TPR ≈ 1.000 (clean) / 0.999 (post-attack) at FPR 1e-6, strong per-bit recovery (0.987 post-attack), and high LPIPS diversity.
  • Why now: provenance demands are rising, and prior NaW schemes traded robustness against diversity.
  • Stay skeptical: depends on inversion quality and calibration via tail extrapolation; limited evaluation against adaptive forgery/removal.
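Calibrating a detection threshold to a target FPR is the part of this recipe that generalizes beyond ShapeMark. A minimal sketch for a bit-matching detector under the null hypothesis of random extracted bits; ShapeMark's own calibration uses tail extrapolation over inversion statistics, which this does not model.

```python
from math import comb

def detection_threshold(n_bits, target_fpr=1e-6):
    """Smallest bit-match count whose false-positive probability falls below
    target_fpr, assuming unwatermarked content yields i.i.d. fair-coin bits."""
    def tail(k):
        # P[X >= k] for X ~ Binomial(n_bits, 0.5)
        return sum(comb(n_bits, i) for i in range(k, n_bits + 1)) / 2 ** n_bits

    for k in range(n_bits + 1):
        if tail(k) <= target_fpr:
            return k
    return n_bits + 1  # FPR unreachable at this payload length

def detect(extracted_bits, payload_bits, threshold):
    """Flag content as watermarked when enough payload bits are recovered."""
    matches = sum(int(a == b) for a, b in zip(extracted_bits, payload_bits))
    return matches >= threshold
```

Note how quickly the budget tightens: at 64 payload bits and FPR 1e-6, roughly four-fifths of the bits must survive post-processing for detection to fire.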

4) ActiveUltraFeedback: Efficient Preference Data Generation using Active Learning

  • Casts preference collection as active selection under uncertainty (ENN) and proposes the DRTS/DELTAUCB acquisition strategies.
  • Reports strong sample efficiency (body: matches or exceeds baselines with ~1/3 of the data; the abstract claims 1/6) across reward modeling and DPO/IPO/SimPO.
  • Why now: preference data is the main scaling bottleneck; active acquisition is a direct lever.
  • Stay skeptical: relies on LLM judges for labels and is extremely compute-heavy (~200K GPU-hours).
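The acquisition idea is easy to prototype against any reward-model ensemble. A generic UCB-flavored sketch, not the paper's DRTS/DELTAUCB rules; `beta` and the exact scoring form are illustrative assumptions.

```python
from itertools import combinations

import numpy as np

def select_pair(ensemble_scores, beta=1.0):
    """Pick the response pair whose preference label should be most informative.

    ensemble_scores: (n_models, n_responses) reward scores from an ensemble.
    Favors pairs with a small mean reward gap (the winner is genuinely
    uncertain) and high model disagreement (epistemic uncertainty).
    """
    mu = ensemble_scores.mean(axis=0)
    sigma = ensemble_scores.std(axis=0)

    def score(i, j):
        return beta * (sigma[i] + sigma[j]) - abs(mu[i] - mu[j])

    n = ensemble_scores.shape[1]
    return max(combinations(range(n), 2), key=lambda p: score(*p))
```

In a labeling loop you would send the selected pair to the annotator (human or LLM judge), update the ensemble, and repeat under a fixed budget.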

5) Evaluation format, not model capability, drives triage failure in the assessment of consumer health AI

  • A mechanistic replication showing that naturalistic free-text prompts improve triage accuracy over exam-style scaffolding (+6.4pp, p=0.015).
  • Attributes the dominant failure mechanism to forced A/B/C/D discretization (e.g., GPT-5.2 on asthma: 16% vs 100%).
  • Why now: health-AI regulation and public narratives lean on evaluation results; this work demonstrates protocol sensitivity.
  • Stay skeptical: a small bank of only 17 scenarios; adjudication via LLM judges; the deployed ChatGPT Health product is not tested directly.
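The protocol sensitivity above is cheap to check in your own pipeline: hold the scenarios fixed and vary only the answer-format constraint. A minimal harness sketch; `ask_model` and `judge` are placeholders you would back with a real model call and a semantic judge, and the prompt templates are invented.

```python
def forced_choice_prompt(scenario, options):
    """Exam-style scaffolding: discretized single-letter answer."""
    return f"{scenario}\nAnswer with exactly one letter:\n" + "\n".join(
        f"{chr(65 + i)}) {o}" for i, o in enumerate(options)
    )

def free_text_prompt(scenario):
    """Naturalistic format: open-ended advice."""
    return f"{scenario}\nWhat should this person do next? Answer in 1-2 sentences."

def run_ab(scenarios, ask_model, judge):
    """Score identical scenarios under both formats.

    ask_model(prompt) -> str          # placeholder for a real model call
    judge(scenario, answer) -> bool   # semantic judge, not regex matching
    """
    results = {"forced": 0, "free": 0}
    for sc in scenarios:
        if judge(sc, ask_model(forced_choice_prompt(sc["text"], sc["options"]))):
            results["forced"] += 1
        if judge(sc, ask_model(free_text_prompt(sc["text"]))):
            results["free"] += 1
    n = len(scenarios)
    return {k: v / n for k, v in results.items()}
```

A large gap between the two accuracies is the format-induced failure the paper describes, measured on your own scenario bank.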

5) Practical next steps

  • For agent-memory safety: prototype a governance layer: (a) read-time freshness decay + ACL filtering; (b) contradiction checks on protected core facts at write time; measure drift on long-horizon tasks with periodic reconciliation (SSGM-style).
  • For RAG/agent reliability: add a "structured retrieval" path ahead of vector similarity (exact-match keys or checklist/provenance navigation); track when each path fires and its error modes (the PRECEPT/AOS-H pattern).
  • For evaluation pipelines: run A/B tests that vary only the output-format constraint (forced choice vs free text; regex vs semantic judge) to quantify format-induced failures (lessons from the triage replication + MedMASLab).
  • For compound-system security: extend red-teaming beyond prompts: inventory software CVEs, toolchain dependencies, and hardware assumptions; run gadget-chain exercises (the Cascade framework) and record which layer fails first.
  • For provenance: if you deploy diffusion generation, test structural watermarks under realistic post-processing pipelines (compression, resizing) and verify calibration at the target FPR; for federated fine-tuning, evaluate black-box traceability via trigger queries (EmbTracker).
  • For alignment data efficiency: replace static pair sampling with uncertainty-driven acquisition; compare downstream RM and DPO performance at a fixed labeling budget (ActiveUltraFeedback).
  • For robust decision-making: in offline RL or agent planning under model uncertainty, test worst-case ensemble selection against mean-model planning, and monitor whether Q-values drop in high-uncertainty regions (an RRPI diagnostic).
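The first next step (read-time freshness decay plus ACL filtering) fits in a single function. A sketch with invented field names; this is not SSGM's actual schema, and the half-life and score cutoff are arbitrary defaults you would tune.

```python
import math
import time

def read_filter(memories, principal, now=None, half_life_days=30.0, min_score=0.2):
    """Read-time gate for an agent memory store: ACL check + freshness decay.

    memories: iterable of dicts with 'text', 'written_at' (epoch seconds),
    'acl' (set of principals allowed to read), and 'confidence' in [0, 1].
    Returns surviving memory texts, highest-scoring first.
    """
    now = time.time() if now is None else now
    out = []
    for m in memories:
        if principal not in m["acl"]:
            continue  # access-control gate: caller may not read this entry
        age_days = (now - m["written_at"]) / 86400.0
        # Exponential decay: weight halves every half_life_days
        freshness = math.exp(-math.log(2) * age_days / half_life_days)
        score = m["confidence"] * freshness
        if score >= min_score:
            out.append((score, m["text"]))
    return [text for _, text in sorted(out, reverse=True)]
```

Pairing this with a write-time contradiction check on protected core facts gives both halves of the governance layer sketched in the first bullet.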

Generated from per-paper analyses; no external browsing.