AI Paper Daily (2026-03-27)

Published:

English version: /paper-news/2026-03-27/

Run statistics

  • Candidate papers: 216
  • Selected papers: 30
  • Deep reads completed: 30
  • Time window (UTC): 2026-03-25T00:00:00Z → 2026-03-26T00:00:00Z (arxiv_announce, expanded=0)
Paper list used for the summary (arXiv ID · title · categories · score · selection rationale · tags):
  • 2603.24511 · Claudini: Autoresearch Discovers State-of-the-Art Adversarial Attack Algorithms for LLMs (cs.LG, cs.AI, cs.CR; score 96). Autonomous autoresearch finds stronger jailbreak/prompt-injection attack algorithms; big eval gains vs 30+ baselines. Tags: agentic-research, jailbreaks, prompt-injection, adversarial-attacks, red-teaming, white-box, evaluation
  • 2603.23801 · AgentRFC: Security Design Principles and Conformance Testing for Agent Protocols (cs.CR; score 94). Security principles + protocol stack + conformance tests for agent protocols (MCP/A2A/etc); formal invariants (TLA+). Tags: agent-security, protocols, MCP, formal-methods, TLA+, conformance-testing, security-principles
  • 2603.24080 · LLMpedia: A Transparent Framework to Materialize an LLM's Encyclopedic Knowledge at Scale (cs.CL, cs.DB; score 94). Scales factuality auditing via 1M generated articles; shows big gap vs MMLU-style benchmarks. Tags: factuality, evaluation, parametric-knowledge, benchmarking, hallucinations, knowledge-auditing
  • 2603.24203 · Invisible Threats from Model Context Protocol: Generating Stealthy Injection Payload via Tree-based Adaptive Search (cs.CR, cs.AI; score 92). Black-box stealthy indirect prompt injection for MCP tool responses; adaptive search to bypass defenses. Tags: prompt-injection, tool-security, MCP, black-box-attacks, agent-security, adversarial-search
  • 2603.23806 · Willful Disobedience: Automatically Detecting Failures in Agentic Traces (cs.SE, cs.AI; score 92). Automated compliance checking of agentic traces; catches procedural/unsafe failures beyond outcomes. Tags: agents, trace-evaluation, oversight, specification, tool-use, safety-monitoring, auditing
  • 2603.24533 · UI-Voyager: A Self-Evolving GUI Agent Learning via Failed Experience (cs.LG, cs.AI, cs.CV; score 92). Self-evolving GUI agent w/ RFT + step-level distillation from failures; strong agentic reliability signal. Tags: agents, GUI-agents, self-improvement, rejection-finetuning, distillation, long-horizon, evaluation
  • 2603.24079 · When Understanding Becomes a Risk: Authenticity and Safety Risks in the Emerging Image Generation Paradigm (cs.CV, cs.AI, cs.CR; score 90). Finds MLLM image generators produce more unsafe/fake images than diffusion; important new risk surface. Tags: multimodal, image-generation, safety, unsafe-content, misinformation, evaluation
  • 2603.23844 · Language Model Planners do not Scale, but do Formalizers? (cs.CL; score 90). Shows LLM formalizers scale on planning via solver programs; key for agent planning + verification. Tags: planning, program-synthesis, formalization, LLM-reasoning, solver, scaling, BlocksWorld
  • 2603.24414 · ClawKeeper: Comprehensive Safety Protection for OpenClaw Agents Through Skills, Plugins, and Watchers (cs.CR, cs.AI; score 89). Holistic runtime security for tool-using agents (skills/plugins/watchers) targeting leakage/escalation risks. Tags: agent-runtime, sandboxing, permissions, tool-use, plugins, security-framework, data-leakage
  • 2603.24329 · GameplayQA: A Benchmarking Framework for Decision-Dense POV-Synced Multi-Video Understanding of 3D Virtual Agents (cs.CL, cs.AI, cs.CV; score 88). Decision-dense POV-synced multi-video benchmark for agent perception/reasoning in 3D multi-agent play. Tags: benchmark, multimodal, video-understanding, agents, multi-agent, evaluation, POV
  • 2603.23934 · Revealing Multi-View Hallucination in Large Vision-Language Models (cs.CV, cs.AI; score 88). MVH-Bench exposes multi-view VLM hallucinations; adds benchmark + training-free decoding mitigation. Tags: VLM, hallucination, benchmark, multiview, evaluation, decoding, robustness
  • 2603.24543 · Analysing the Safety Pitfalls of Steering Vectors (cs.CR, cs.CL; score 87). Safety audit shows steering vectors can sharply raise/lower jailbreak ASR; highlights activation-steering risk surface. Tags: activation-steering, CAA, jailbreaks, robustness, model-editing, safety-evaluation
  • 2603.24124 · The Alignment Tax: Response Homogenization in Aligned LLMs and Its Implications for Uncertainty Estimation (cs.LG, cs.AI, cs.CL; score 86). Finds RLHF/DPO causes response homogenization that breaks sampling-based uncertainty; important reliability insight. Tags: RLHF, DPO, uncertainty, calibration, reliability, TruthfulQA, evaluation
  • 2603.23909 · DUPLEX: Agentic Dual-System Planning via LLM-Driven Information Extraction (cs.AI; score 86). Neuro-symbolic planning that confines the LLM to schema-guided extraction to reduce hallucinated plans. Tags: agents, planning, neuro-symbolic, PDDL, reliability, hallucination-mitigation, robotics
  • 2603.23841 · PoliticsBench: Benchmarking Political Values in Large Language Models with Multi-Turn Roleplay (cs.CL, cs.AI; score 86). New multi-turn roleplay benchmark to measure political values/bias drift across major LLMs. Tags: benchmark, bias, politics, multi-turn, evaluation, roleplay
  • 2603.23867 · Can VLMs Reason Robustly? A Neuro-Symbolic Investigation (cs.LG, cs.AI, cs.CV; score 86). Finds VLM reasoning brittle under covariate shift; neuro-symbolic angle for robust generalization. Tags: VLM, robustness, distribution-shift, neuro-symbolic, reasoning, evaluation
  • 2603.23848 · BeliefShift: Benchmarking Temporal Belief Consistency and Opinion Drift in LLM Agents (cs.CL, cs.CY; score 84). Longitudinal benchmark for belief consistency/drift and evidence-driven revision in multi-session LLM agents. Tags: benchmarks, agent-memory, longitudinal-eval, belief-dynamics, consistency, over-alignment
  • 2603.24582 · The Stochastic Gap: A Markovian Framework for Pre-Deployment Reliability and Oversight-Cost Auditing in Agentic Artificial Intelligence (cs.AI; score 84). Markov framework to audit reliability vs oversight cost for stochastic agent workflows pre-deployment. Tags: agentic-systems, reliability, oversight, governance, risk-metrics, workflow-auditing
  • 2603.23996 · Forensic Implications of Localized AI: Artifact Analysis of Ollama, LM Studio, and llama.cpp (cs.CR; score 84). Forensic artifact study of local LLM runners; relevant to auditing, incident response, and misuse. Tags: security, forensics, local-LLMs, ollama, llama.cpp, LM-Studio, auditability
  • 2603.24440 · CUA-Suite: Massive Human-annotated Video Demonstrations for Computer-Use Agents (cs.LG, cs.AI, cs.CV; score 83). Large human-annotated continuous video demos for computer-use agents; likely high-leverage dataset for agent training. Tags: computer-use-agents, datasets, demonstrations, video, tool-use, agent-training, evaluation
  • 2603.24125 · Alignment Reduces Expressed but Not Encoded Gender Bias: A Unified Framework and Study (cs.CL; score 83). Finds alignment reduces expressed but not encoded gender bias; unified intrinsic/extrinsic analysis. Tags: bias, alignment, representation, fairness, evaluation, interpretability
  • 2603.23990 · From Untamed Black Box to Interpretable Pedagogical Orchestration: The Ensemble of Specialized LLMs Architecture for Adaptive Tutoring (cs.CY, cs.AI; score 82). Interpretable orchestrator + specialized LLMs for tutoring; improves controllability and constraint adherence. Tags: agent-architecture, controllability, education, orchestration, reliability, governance
  • 2603.23878 · The Luna Bound Propagator for Formal Analysis of Neural Networks (cs.LG, cs.AI, cs.LO; score 82). C++ alpha-CROWN bound propagator for NN verification; improves deployability of formal methods. Tags: verification, alpha-CROWN, robustness, formal-methods, tooling, C++
  • 2603.24580 · Retrieval Improvements Do Not Guarantee Better Answers: A Study of RAG for AI Policy QA (cs.CL, cs.AI, cs.CY, cs.IR, cs.LG; score 81). Shows better retrieval may not improve answers in AI policy QA; domain RAG study on AGORA with preferences/DPO. Tags: RAG, evaluation, AI-policy, retrieval, grounding, preference-learning, DPO
  • 2603.24586 · Comparing Developer and LLM Biases in Code Evaluation (cs.SE, cs.CL; score 81). TRACE measures LLM-judge vs developer preference gaps; extracts rubric biases across coding settings. Tags: evaluation, LLM-judges, human-preferences, bias, code, rubrics, reliability
  • 2603.23889 · Off-Policy Safe Reinforcement Learning with Constrained Optimistic Exploration (cs.LG, cs.RO; score 80). Off-policy safe RL with constrained optimistic exploration; targets constraint violations directly. Tags: safe-RL, constraints, off-policy, exploration, robotics, reliability
  • 2603.23853 · SCoOP: Semantic Consistent Opinion Pooling for Uncertainty Quantification in Multiple Vision-Language Model Systems (cs.AI, cs.MA; score 79). Training-free uncertainty pooling across multiple VLMs improves hallucination detection/abstention. Tags: uncertainty, hallucination-detection, VLM, ensembles, abstention, calibration
  • 2603.23840 · VehicleMemBench: An Executable Benchmark for Multi-User Long-Term Memory in In-Vehicle Agents (cs.AI, cs.CL; score 78). Executable benchmark for multi-user long-term memory + tool interaction in in-vehicle agents; tests conflicts over time. Tags: benchmarks, agent-memory, long-context, tool-use, multi-user, simulation, reliability
  • 2603.24282 · Software Supply Chain Smells: Lightweight Analysis for Secure Dependency Management (cs.SE, cs.CR; score 78). Lightweight tool to detect supply-chain security 'smells' in Maven/NPM; practical security signal. Tags: security, software-supply-chain, dependency-management, tooling, risk-detection
  • 2603.24518 · TuneShift-KD: Knowledge Distillation and Transfer for Fine-tuned Models (cs.LG; score 78). Transfers specialized knowledge across models via few-example distillation when data is unavailable. Tags: knowledge-distillation, model-transfer, fine-tuning, LoRA, data-privacy, LLMs

AI Paper Insights Briefing

2026-03-27

0) Core takeaways (read these first)

  • Agent "control planes" are becoming the new security boundary, and composition is the weakest point. Formal conformance checking of agent protocols uncovers many spec/implementation mismatches, and composition breaks properties that hold in isolation (20/21 composition-safety invariants violated in the composed model).
  • Outcome-only evaluation is increasingly misleading for agents. Trajectory-level compliance checking shows that even traces with perfect outcomes can hide procedural violations (e.g., 83% of Claude's perfect-reward τ2 traces still contain at least one violation).
  • The bottleneck for long-horizon reliability is shifting from tool execution to state/memory correctness. In an executable in-vehicle benchmark, errors are dominated by memory construction/retrieval (63.9% of errors), with a sharp performance drop from the gold-memory to the autonomous-memory setting.
  • In planning, LLM-as-planner still does not scale; LLM-as-formalizer/extractor does. Several papers consistently show that constraining the LLM to information extraction/formalization and handing search/verification to symbolic solvers markedly improves success rates on large BlocksWorld instances and IPC/household planning.
  • Alignment interventions can degrade safety instrumentation. Two distinct "alignment side effects" appear: (i) response homogenization breaks sampling-based uncertainty estimation (high single-cluster rates), and (ii) activation steering vectors can sharply raise jailbreak success rates through their overlap with refusal directions.
  • Multimodal safety is undergoing a paradigm shift. MLLM-based image generators produce unsafe images at a higher rate than diffusion baselines and evade existing detectors unless those detectors are retrained on data covering the new paradigm; multi-view LVLMs also show a distinctive hallucination pattern, for which a decoding-time mitigation exists.
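The uncertainty-collapse point above can be made concrete with a single-cluster-rate diagnostic. This is a minimal sketch: real pipelines cluster samples by semantic equivalence (e.g., bidirectional entailment), whereas here normalized exact match stands in for clustering, and all function names are illustrative.

```python
from collections import Counter

def cluster_samples(samples):
    """Group sampled responses into clusters.

    Stand-in for semantic clustering: normalize whitespace/case and
    group exact matches. Real pipelines use entailment-based equivalence.
    """
    normalize = lambda s: " ".join(s.lower().split())
    return Counter(normalize(s) for s in samples)

def single_cluster_rate(per_query_samples):
    """Fraction of queries whose samples all land in one cluster.

    When alignment homogenizes responses, this rate approaches 1.0 and
    sampling-based uncertainty (e.g., semantic entropy) carries no signal.
    """
    collapsed = sum(1 for samples in per_query_samples
                    if len(cluster_samples(samples)) == 1)
    return collapsed / len(per_query_samples)

# Two queries: one homogenized, one with genuinely diverse samples.
queries = [
    ["Paris.", "paris.", "  Paris. "],   # one cluster -> collapsed
    ["Paris.", "Lyon.", "Paris."],       # two clusters -> informative
]
print(single_cluster_rate(queries))  # -> 0.5
```

A high rate on a probe set is a cheap early warning that sampling-based uncertainty estimates will be uninformative for that model.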

2) Key themes (clusters)

Theme: Agent protocol security and supply-chain-style injection

Theme: Trajectory-level auditing beats outcome-only scoring for agent evaluation

Theme: Memory and longitudinal belief dynamics as the next reliability frontier

  • Why it matters: persistent agents must manage evolving preferences/beliefs across sessions and users; failures show up as drift, mishandled contradictions, or incorrect state updates.
  • Representative papers
  • Common methods
    • Long-horizon, multi-session trajectories with objective metrics (state-based evaluation; belief-state vectors).
    • Separating "gold memory" from autonomous memory construction to isolate the bottleneck.
    • Measuring the stability/adaptability trade-off (evidence-driven revision vs. drift resistance).
  • Open problems / failure modes
    • Retrieval helps recall but does not necessarily prevent model-induced nudging (RAG improves revision/CRR but barely changes drift consistency).
    • Hard cases: conditional constraints and multi-user conflict resolution in executable environments.
    • Scaling contradiction-resolution evaluation beyond human grading.
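The drift-vs-revision distinction above can be sketched as a pair of transition-level rates. This is illustrative only: the actual benchmark metrics are richer, and the representation of belief states as tuples is an assumption made for the example.

```python
def belief_metrics(belief_states, evidence_flags):
    """Score a session sequence of belief-state vectors.

    belief_states: one belief vector (tuple) per session.
    evidence_flags: for each transition, True if new contradicting
    evidence was presented before it.

    Returns (drift_rate, revision_rate): the fraction of evidence-free
    transitions where beliefs changed anyway (unwanted drift), and the
    fraction of evidence-bearing transitions where beliefs were updated
    (desired evidence-driven revision).
    """
    drift = drift_total = revised = revise_total = 0
    for prev, cur, has_evidence in zip(belief_states, belief_states[1:],
                                       evidence_flags):
        changed = prev != cur
        if has_evidence:
            revise_total += 1
            revised += changed
        else:
            drift_total += 1
            drift += changed
    rate = lambda n, d: n / d if d else 0.0
    return rate(drift, drift_total), rate(revised, revise_total)

# Four sessions: one unprompted flip (drift), one justified update.
states = [(1, 0), (0, 0), (0, 0), (0, 1)]
flags = [False, False, True]
print(belief_metrics(states, flags))  # -> (0.5, 1.0)
```

An ideal persistent agent scores near (0.0, 1.0): stable without evidence, responsive with it.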

Theme: Constrain the LLM to formalization/extraction; let solvers verify and search

  • Why it matters: end-to-end LLM planning degrades sharply with combinatorial complexity; formalize-then-solve pipelines improve scalability and verifiability.
  • Representative papers
  • Common methods
    • Use the LLM for schema-guided information extraction or translation into solver-friendly representations (e.g., PDDL).
    • Add deterministic mapping/validation layers to avoid brittle code generation.
    • Use reflection/repair loops triggered by solver diagnostics.
  • Open problems / failure modes
    • Domain dependence (results concentrate on BlocksWorld / PDDL-authoring domains).
    • "Unraveling" compression: natural-language descriptions expand into huge formal specifications; higher-order generator approaches help but need broader validation.
    • Latency of iterative repair loops in real-time embodied settings.
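The common pipeline shape across this theme can be sketched as a control loop. Everything here is a stub: `formalize` stands in for the LLM call (the papers emit PDDL or generator programs), `solve` stands in for the symbolic planner, and the toy spec/plan are invented for illustration.

```python
def formalize(nl_task, feedback=None):
    """Stand-in for the LLM formalizer: extract a solver-ready spec.

    Returns a toy dict and 'repairs' a missing goal when given
    validator feedback, mimicking a reflection round.
    """
    spec = {"objects": ["a", "b"], "init": [("on", "a", "b")], "goal": []}
    if feedback == "empty goal":
        spec["goal"] = [("on", "b", "a")]
    return spec

def validate(spec):
    """Deterministic validation layer: reject malformed specs before
    they reach the solver, instead of relying on brittle codegen."""
    if not spec["goal"]:
        return "empty goal"
    undeclared = {o for fact in spec["goal"] for o in fact[1:]
                  if o not in spec["objects"]}
    return f"undeclared objects: {sorted(undeclared)}" if undeclared else None

def solve(spec):
    """Stand-in for the symbolic solver/planner (which would also
    verify the returned plan against the spec)."""
    return ["unstack a b", "stack b a"]

def plan_with_repair(nl_task, max_rounds=3):
    """Reflection/repair loop driven by solver-side diagnostics."""
    feedback = None
    for _ in range(max_rounds):
        spec = formalize(nl_task, feedback)
        feedback = validate(spec)
        if feedback is None:
            return solve(spec)
    raise RuntimeError(f"could not formalize: {feedback}")

print(plan_with_repair("put b on a"))  # -> ['unstack a b', 'stack b a']
```

The design point is that the LLM never searches: it only produces a spec, and every spec passes a deterministic gate before any solver time is spent.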

Theme: Multimodal hallucination and uncertainty: benchmarks plus training-free mitigation

Theme: Alignment side effects: uncertainty collapse, steering risks, and encoded-vs-expressed bias

3) Technical synthesis

  • Formal invariants plus executable replay are becoming a practical security workflow: AgentRFC chains prose specs → typed IR → TLA+ invariants → counterexample traces → SDK-level tests, connecting "paper security" to implementation-level failures.
  • Composition is the shared Achilles' heel of protocols and agents: protocol bridging breaks invariants, and when agent systems compose tools/memory/judges, local correctness does not imply global safety.
  • "Constraining the LLM" is a cross-domain pattern: DUPLEX restricts the LLM to schema-guided IE; the BlocksWorld work uses the LLM as formalizer/higher-order generator; VLC decouples perception from exact symbolic execution.
  • Evaluation is moving from single outputs to trajectories and distributions: AgentPex does multi-judge compliance scoring; BeliefShift evaluates belief-state sequences; SCoOP builds system-level distributions from sampled outputs.
  • RAG helps recall but is not necessarily drift-resistant: BeliefShift shows RAG improves belief revision and contradiction handling but barely changes drift consistency; the policy-RAG study shows retrieval-metric gains do not guarantee better answers.
  • Training-free, inference-time interventions are gaining ground: RSCD (attention-mask contrastive decoding) and SCoOP (entropy-weighted pooling) improve robustness without retraining, but depend on architectural assumptions and sampling cost.
  • Alignment can weaken common safety heuristics: response homogenization breaks semantic entropy/self-consistency; activation steering raises jailbreak ASR through geometric overlap with refusal directions.
  • Benchmarks are becoming more executable and state-based (VehicleMemBench) and more diagnostic (GameplayQA and MVH-Bench, via distractors and paired designs), reducing reliance on subjective judges.
  • Automated "research agents" are now a security factor: Claudini shows LLM-driven algorithm search can substantially improve white-box jailbreak optimizers, raising the bar for defense evaluation.

4) Top 5 papers (with "why now")

1) AgentRFC: Security Design Principles and Conformance Testing for Agent Protocols

  • Provides a 6-layer Agent Protocol Stack that makes protocol completeness explicit for MCP/A2A/ANP/ACP.
  • Defines 11 agent-agnostic security principles as TLA+ invariants, with a taxonomy separating spec-mandated requirements from hardening items.
  • Delivers an end-to-end spec→IR→TLA+→counterexample→SDK-test pipeline; finds 33 spec-level violations and confirms implementation-level violations via 42 tests.
  • Why now: agent protocols are being deployed and composed rapidly, and the paper shows composition breaks properties (20/21 CS invariants violated).
  • Be skeptical of: bounded model checking (small bounds) and manual extraction of spec clauses.
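The invariant-to-test end of this pipeline can be sketched as replaying recorded traces against a safety predicate. The invariant below is illustrative, not one of the paper's 11, and the event encoding is invented for the example; the real pipeline derives such checks from TLA+ counterexamples.

```python
def no_call_before_consent(trace):
    """Illustrative safety invariant: a tool may only be invoked while
    consent for it is granted and not yet revoked."""
    granted = set()
    for kind, tool in trace:
        if kind == "grant":
            granted.add(tool)
        elif kind == "revoke":
            granted.discard(tool)
        elif kind == "call" and tool not in granted:
            return False  # counterexample found in this trace
    return True

def check(invariant, traces):
    """Replay recorded SDK traces against an invariant and return the
    violating traces, mirroring the spec -> invariant -> test chain."""
    return [t for t in traces if not invariant(t)]

traces = [
    [("grant", "fs"), ("call", "fs")],                    # conforms
    [("grant", "fs"), ("revoke", "fs"), ("call", "fs")],  # violates
]
bad = check(no_call_before_consent, traces)
print(len(bad))  # -> 1
```

In CI, each violating trace becomes a regression test pinned against the SDK, which is what connects spec-level and implementation-level findings.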

2) Willful Disobedience: Automatically Detecting Failures in Agentic Traces

  • Proposes AgentPex: explicit rule extraction from prompts/tool schemas plus multi-judge trace auditing, with gated-min aggregation.
  • Shows that outcome-only success masks failures: 83% of Claude's perfect-reward traces still contain procedural violations.
  • Reveals model-specific procedural failure patterns (e.g., simultaneous text + tool-call violations).
  • Why now: production agents generate traces at scale, and automated procedural auditing is becoming necessary.
  • Be skeptical of: cost (multiple LLM calls per trace) and benchmark/domain generalization (focused on τ2-bench).
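The aggregation step can be sketched in a few lines. Note this encodes one plausible reading of "gated-min", not the paper's confirmed definition: gate out rules that do not apply to a given trace, then take the minimum over the remaining per-rule scores so a single hard violation dominates the trace score.

```python
def gated_min(rule_scores, applicable):
    """Aggregate per-rule compliance scores for one trace.

    Assumed semantics (the paper's exact definition may differ):
    ignore rules that don't apply to this trace (the gate), then take
    the minimum over the rest, so one clear violation drives the trace
    score down regardless of how many other rules pass.
    """
    gated = [s for s, on in zip(rule_scores, applicable) if on]
    return min(gated) if gated else 1.0  # no applicable rule: vacuously compliant

# Three rules; the second doesn't apply to this trace, so the score is
# the worst of the two applicable rules, not an average.
print(gated_min([1.0, 0.2, 0.1], [True, False, True]))  # -> 0.1
```

Compared with mean aggregation, this prevents many passing rules from washing out one procedural violation, which is exactly the failure mode outcome-only scoring exhibits.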

3) VehicleMemBench: An Executable Benchmark for Multi-User Long-Term Memory in In-Vehicle Agents

  • Executable simulator plus state-based evaluation for multi-user preference evolution, with 111 APIs.
  • Quantifies the gold-memory → autonomous-memory drop (e.g., ESM for one strong model falls from 90.60 to 64.80 under one memory method).
  • Finds that memory errors dominate (63.9% of errors), not tool execution.
  • Why now: memory is the bottleneck for long-horizon agents, and this benchmark provides an objective test harness.
  • Be skeptical of: scenario scope and extension to longer, more complex real driving contexts.
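The gold-vs-autonomous comparison can be sketched as a paired harness. Caveat: the summary only reports ESM numbers, so reading ESM as "exact state match" is an assumption here, and `run`, `episodes`, and the state strings are toy stand-ins.

```python
def exact_state_match(predicted_states, gold_states):
    """Percentage of steps where the agent's resulting environment state
    exactly matches the gold state (assumed expansion of 'ESM')."""
    hits = sum(p == g for p, g in zip(predicted_states, gold_states))
    return 100.0 * hits / len(gold_states)

def compare_memory_settings(run, episodes):
    """Run each episode twice, once with gold memory injected and once
    with the agent's own (autonomous) memory; the score gap isolates
    memory construction/retrieval as the bottleneck."""
    gold = [run(ep, memory="gold") for ep in episodes]
    auto = [run(ep, memory="autonomous") for ep in episodes]
    return gold, auto

# Toy run(): with gold memory the agent reaches every target state;
# autonomous memory introduces one retrieval-induced error.
def run(episode, memory):
    target = episode["target_states"]
    if memory == "gold":
        return exact_state_match(target, target)
    flawed = target[:-1] + ["<wrong>"]
    return exact_state_match(flawed, target)

episodes = [{"target_states": ["s0", "s1", "s2", "s3"]}]
gold, auto = compare_memory_settings(run, episodes)
print(gold, auto)  # -> [100.0] [75.0]
```

Reporting both columns (rather than only the autonomous score) is what makes the memory-error breakdown in the practical next steps below measurable.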

4) Language Model Planners do not Scale, but do Formalizers?

  • Controlled scaling study: LLM-as-planner degrades to ~20% at around 30 blocks, while formalizers scale better (e.g., one model stays at 100% up to 100 blocks).
  • Practical techniques: divide-and-conquer formalization and higher-order formalizers (emitting generator programs to handle "unraveled" inputs).
  • Why now: planning is core to agents, and this work clarifies where the LLM belongs in the system stack.
  • Be skeptical of: narrow domain (BlocksWorld) and single-run evaluation/solver-crash handling.

5) The Alignment Tax: Response Homogenization in Aligned LLMs and Its Implications for Uncertainty Estimation

  • Diagnoses response homogenization (high single-cluster rates), which collapses sampling-based uncertainty on many queries.
  • Attributes much of the effect to preference optimization (DPO) via staged ablations and base-vs-instruct comparisons.
  • Proposes UCBD: a cascade starting from the cheapest signal, resolving many queries with token entropy and escalating selectively.
  • Why now: many safety stacks rely on self-consistency/semantic entropy, and this work shows where they structurally fail.
  • Be skeptical of: coverage limited to open 3B-14B families with imperfect judge labels; the cascade has no formal guarantees.
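The cascade idea can be sketched as follows. This is the shape of the idea, not the paper's exact stages: the thresholds, the exact-match clustering stand-in, and the function names are all assumptions made for illustration.

```python
import math

def token_entropy(token_probs):
    """Mean per-token entropy from a single forward pass: cheap, and
    unaffected by response homogenization across samples."""
    ents = [-sum(p * math.log(p) for p in dist if p > 0)
            for dist in token_probs]
    return sum(ents) / len(ents)

def cascade_uncertainty(token_probs, sample_fn, low=0.05, high=1.5):
    """UCBD-style cascade sketch: resolve a query with token entropy
    when it is decisive, and escalate to costly sampling-based
    clustering only for the ambiguous middle band."""
    h = token_entropy(token_probs)
    if h < low:
        return "confident", h
    if h > high:
        return "uncertain", h
    # Ambiguous: escalate to sampling + clustering (expensive path).
    samples = sample_fn()
    clusters = len({" ".join(s.lower().split()) for s in samples})
    return ("uncertain" if clusters > 1 else "confident"), h

# Near-deterministic token distributions: the cheap path suffices and
# no samples are ever drawn.
probs = [[0.99, 0.01], [0.995, 0.005]]
label, _ = cascade_uncertainty(probs, sample_fn=lambda: ["yes"] * 5)
print(label)  # -> confident
```

The point of the cascade is cost routing: most queries never reach the sampling stage, which is also the stage homogenization breaks.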

5) Practical next steps

  • If you ship agent protocols: adopt an APS-style checklist and make composition tests (bridge/proxy scenarios) first-class CI artifacts; treat "holds in isolation" as insufficient.
  • For MCP/tool ecosystems: assume structured tool outputs are injection channels; add provenance, consent enforcement, and audit-completeness checks, and test against adaptive payload generation (TIP-style attacks).
  • For agent evaluation: add trajectory-level compliance scoring (explicit rule extraction + forbidden edges + argument checks) on top of outcome metrics; track the rate of procedural violations among successful outcomes.
  • For long-horizon assistants: measure memory as a stateful system (gold vs. autonomous memory) and report an explicit memory-error breakdown; prioritize coverage of conditional constraints and conflict scenarios.
  • For planning/automation: restructure the stack so the LLM does schema-guided extraction/formalization with deterministic mapping; use solver diagnostics to trigger targeted repair loops instead of free-form replanning.
  • For uncertainty/abstention: do not rely solely on sampling-based semantic entropy; add cheap single-forward-pass signals (e.g., token entropy) and route to heavier checks only when needed.
  • For multimodal deployments: retrain, or at least validate, forged-image detectors on MLLM-generated images (paradigm-covering data), and add dedicated multi-view evaluations (paired-view grounding) when using multi-camera inputs.
  • For alignment interventions (steering, adapters): gate releases on jailbreak-ASR regression tests; check geometric overlap with refusal directions and evaluate whether steering raises ASR even under simple templates.
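The last item can be sketched as a release gate. The thresholds, numbers, and function names below are placeholders invented for illustration, not values from any of the papers; real vectors live in model activation space, not 3 dimensions.

```python
def cosine(u, v):
    """Cosine similarity between two vectors (pure-Python)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

def steering_release_gate(steering_vec, refusal_dir, asr_before, asr_after,
                          max_overlap=0.3, max_asr_delta=0.02):
    """Block a steering intervention if its vector overlaps the refusal
    direction too strongly (in either sign) or if the jailbreak-ASR
    regression suite shows a meaningful increase. Thresholds are
    placeholders to be tuned per deployment."""
    overlap = abs(cosine(steering_vec, refusal_dir))
    asr_delta = asr_after - asr_before
    ok = overlap <= max_overlap and asr_delta <= max_asr_delta
    return ok, {"refusal_overlap": round(overlap, 3),
                "asr_delta": round(asr_delta, 3)}

# A steering vector nearly anti-parallel to the refusal direction: the
# kind of geometry the steering-vector audit above flags as risky.
ok, report = steering_release_gate(
    steering_vec=[-1.0, 0.0, 0.1],
    refusal_dir=[1.0, 0.0, 0.0],
    asr_before=0.04, asr_after=0.19,
)
print(ok)  # -> False
```

Checking absolute overlap matters because a vector pointing away from the refusal direction is exactly what raises jailbreak ASR.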

Generated from per-paper analyses; no external browsing.