AI Paper Daily (2026-04-09)
Published:
English version: /paper-news/2026-04-09/
Run statistics
- Candidate papers: 261
- Selected papers: 30
- Deep reads completed: 30
- Time window (UTC): 2026-04-07T00:00:00Z → 2026-04-08T00:00:00Z (arxiv_announce, expanded=0)
Expand to view the paper list used for this summary
| arXiv ID | Title / Link | Category | Score | Selection Rationale | Tags |
|---|---|---|---|---|---|
2604.05292 | Broken by Default: A Formal Verification Study of Security Vulnerabilities in AI-Generated Code | cs.CR, cs.AI, cs.SE | 96 | Formal-verif study finds 55.8% AI code vulnerable; strong security methodology + dataset scale | code-security, formal-verification, LLM-coding, CWE, SMT, evaluation |
2604.05969 | A Formal Security Framework for MCP-Based AI Agents: Threat Taxonomy, Verification Models, and Defense Mechanisms | cs.CR, cs.AI | 95 | Formal security framework for MCP agent ecosystems: taxonomy, verification models, defenses. | agent-security, MCP, threat-modeling, formal-methods, tool-use, verification |
2604.05432 | Your LLM Agent Can Leak Your Data: Data Exfiltration via Backdoored Tool Use | cs.CR, cs.AI | 94 | Backdoored tool-use agents can exfiltrate stored context via memory/retrieval tool calls. | data-exfiltration, backdoors, tool-use, agent-security, memory, prompt-injection |
2604.05358 | LatentAudit: Real-Time White-Box Faithfulness Monitoring for Retrieval-Augmented Generation with Verifiable Deployment | cs.AI, cs.LG | 93 | White-box, real-time RAG faithfulness monitor using residual activations; verifiable deployment angle | RAG, faithfulness, monitoring, white-box, hallucinations, verification, residual-stream |
2604.06154 | Exclusive Unlearning | cs.CL | 93 | Unlearning-by-retention for broad harm removal; claims jailbreak robustness while keeping utility | unlearning, jailbreaks, safety, harmful-content, post-training |
2604.05485 | Auditable Agents | cs.AI | 92 | Defines actionable auditability dimensions for agents; focuses on evidence integrity & attribution. | auditability, accountability, agents, logging, governance, monitoring |
2604.05339 | Human Values Matter: Investigating How Misalignment Shapes Collective Behaviors in LLM Agent Communities | cs.CL | 92 | Multi-agent env to test how value misalignment changes collective behavior; direct agent-safety relevance | multi-agent, values, misalignment, emergent-behavior, simulation, agent-safety |
2604.05480 | Can You Trust the Vectors in Your Vector Database? Black-Hole Attack from Embedding Space Defects | cs.CR, cs.DB | 91 | Practical poisoning attack on vector DBs via centroid hubness; high relevance to RAG security | security, RAG, vector-database, data-poisoning, embeddings, retrieval-attacks, hubness |
2604.06091 | Social Dynamics as Critical Vulnerabilities that Undermine Objective Decision-Making in LLM Collectives | cs.CL, cs.AI, cs.MA | 91 | Shows social-psychology vulnerabilities in LLM collectives; adversaries sway representative agents | multi-agent, security, social-influence, robustness, adversarial-evaluation |
2604.06132 | Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents | cs.AI | 90 | Agent eval suite with trace-level evidence channels; targets safety/robustness gaps in benchmarks. | agent-evaluation, benchmarks, traces, robustness, multimodal, safety-eval |
2604.05995 | The Model Agreed, But Didn't Learn: Diagnosing Surface Compliance in Large Language Models | cs.CL, cs.AI, cs.LG | 90 | Diagnoses knowledge-editing evals: models can comply without real learning; improves reliability testing | knowledge-editing, evaluation, reliability, self-assessment, robustness |
2604.05279 | Pressure, What Pressure? Sycophancy Disentanglement in Language Models via Reward Decomposition | cs.AI | 89 | Targets sycophancy with reward decomposition separating pressure capitulation vs evidence blindness | alignment, sycophancy, reward-modeling, RLHF, DPO, robustness, evaluation |
2604.05793 | BodhiPromptShield: Pre-Inference Prompt Mediation for Suppressing Privacy Propagation in LLM/VLM Agents | cs.CR, cs.CV | 88 | Propagation-aware prompt privacy mediation across retrieval/memory/tools; benchmarked reductions. | privacy, agents, prompt-mediation, PII, tool-calls, RAG, memory |
2604.05779 | What Models Know, How Well They Know It: Knowledge-Weighted Fine-Tuning for Learning When to Say "I Don't Know" | cs.CL, cs.AI | 88 | Knowledge-weighted finetuning to reduce hallucinations and elicit 'I don't know' with new uncertainty metrics | hallucination, uncertainty, calibration, abstention, fine-tuning, reliability |
2604.05336 | TRACE: Capability-Targeted Agentic Training | cs.AI | 88 | Capability-targeted agent training from failure/success contrasts; practical agent self-improvement | agents, training, self-improvement, trajectory-learning, evaluation |
2604.05719 | Hackers or Hallucinators? A Comprehensive Analysis of LLM-Based Automated Penetration Testing | cs.CR, cs.AI, cs.SE | 86 | SoK + unified empirical eval of LLM automated pentesting frameworks; clarifies real capability. | cybersecurity, agents, SoK, autonomous-attacks, evaluation, dual-use |
2604.06126 | Gym-Anything: Turn any Software into an Agent Environment | cs.LG, cs.AI | 86 | Scales computer-use agent eval by auto-building software environments with audit agent verification | agents, computer-use, benchmarks, environment-generation, auditing, tool-use, evaluation |
2604.05557 | EpiBench: Benchmarking Multi-turn Research Workflows for Multimodal Agents | cs.CL | 86 | Episodic multi-turn multimodal benchmark for research workflows: search, figures/tables, cross-paper memory | agents, benchmark, multimodal, tool-use, search, long-horizon |
2604.05623 | DetailVerifyBench: A Benchmark for Dense Hallucination Localization in Long Image Captions | cs.CV, cs.CL, cs.MM | 86 | Benchmark for token-level hallucination localization in long captions; dense, multi-domain eval | hallucinations, multimodal, benchmark, evaluation, reliability |
2604.06019 | CritBench: A Framework for Evaluating Cybersecurity Capabilities of Large Language Models in IEC 61850 Digital Substation Environments | cs.CR, cs.AI | 85 | OT-focused LLM cyber capability eval in IEC 61850 substations; fills IT-only benchmark gap. | cybersecurity, OT-security, evaluation, agents, critical-infrastructure, dual-use |
2604.05955 | Does Pass Rate Tell the Whole Story? Evaluating Design Constraint Compliance in LLM-based Issue Resolution | cs.SE, cs.AI | 84 | Benchmark for issue-resolution beyond tests: explicit design-constraint compliance from real PRs | agents, software-engineering, code-agents, benchmarks, constraint-compliance, evaluation |
2604.05593 | Label Effects: Shared Heuristic Reliance in Trust Assessment by Humans and LLM-as-a-Judge | cs.AI, cs.CL | 84 | Shows LLM-as-judge trust is label-biased; counterfactual + attention analysis questions evaluator validity | LLM-judge, evaluation, bias, trust, human-factors, robustness |
2604.05483 | Can We Trust a Black-box LLM? LLM Untrustworthy Boundary Detection via Bias-Diffusion and Multi-Agent Reinforcement Learning | cs.AI, cs.CL | 84 | Black-box method to map topics where LLM becomes biased/untrustworthy using KG + multi-agent RL | bias, trustworthiness, black-box, red-teaming, reinforcement-learning |
2604.05872 | Swiss-Bench 003: Evaluating LLM Reliability and Adversarial Security for Swiss Regulatory Contexts | cs.CR, cs.AI, cs.CL | 83 | Swiss regulatory reliability+adversarial security benchmark across 4 languages and 808 items. | evaluation, reliability, adversarial, regulation, multilingual, prompt-leakage |
2604.05912 | FrontierFinance: A Long-Horizon Computer-Use Benchmark of Real-World Financial Tasks | cs.CL | 83 | Long-horizon computer-use benchmark for real finance workflows; useful for tracking agent capability | agents, benchmarks, computer-use, long-horizon, finance, evaluation, accountability |
2604.05952 | Towards Trustworthy Report Generation: A Deep Research Agent with Progressive Confidence Estimation and Calibration | cs.AI, cs.CL | 83 | Deep research agent with progressive confidence estimation/calibration to improve report trust | agents, calibration, uncertainty, trustworthiness, report-generation |
2604.06013 | Epistemic Blinding: An Inference-Time Protocol for Auditing Prior Contamination in LLM-Assisted Analysis | cs.AI, cs.CL | 82 | Inference-time protocol to audit memorized priors vs data-driven reasoning via entity blinding. | audit, data-contamination, epistemic, evaluation, grounding, scientific-LLMs |
2604.05522 | Cross-Modal Coreference Alignment: Enabling Reliable Information Transfer in Omni-LLMs | cs.CL | 82 | Cross-modal coreference dataset/tasks to improve omni-LLM alignment of referents; reliability for multimodal agents | multimodal, coreference, dataset, grounding, evaluation, omni-LLM |
2604.05333 | Graph of Skills: Dependency-Aware Structural Retrieval for Massive Agent Skills | cs.AI | 82 | Dependency-aware retrieval for massive skill libraries; reduces context bloat and agent errors | agents, tool-use, retrieval, skills, long-context-efficiency |
2604.05348 | From Retinal Evidence to Safe Decisions: RETINA-SAFE and ECRT for Hallucination Risk Triage in Medical LLMs | cs.AI | 81 | Medical hallucination risk triage benchmark + white-box detector for evidence conflict/gaps. | hallucinations, medical-safety, benchmarks, uncertainty, risk-triage, grounding |
AI Paper Insights Briefing
2026-04-09
0) Key takeaways (read this first)
- White-box monitoring is becoming a deployable primitive: two independent lines of work show that internal-state signals can triage hallucinations/faithfulness with high accuracy and low latency (medical evidence triage; RAG faithfulness monitoring with sub-millisecond overhead and optional zero-knowledge verification).
- Agent security is shifting from prompt injection to system-level exploitation of "tools + memory + retrieval": backdoored tool use can exfiltrate session memory through seemingly legitimate retrieval traffic, and vector databases allow query-agnostic poisoning via near-centroid "black-hole" embeddings; both bypass content-centric defenses.
- Evaluation is moving from outcome-only scoring to trajectory- and process-based auditing: new benchmarks/frameworks emphasize trace evidence, robustness under perturbation, and multi-turn workflows (Claw-Eval, EpiBench, FrontierFinance), and they repeatedly show that output-only judging misses major safety/robustness failures.
- Targeted training signals beat a single monolithic reward for fixing social/agentic failures: decomposed reward shaping reduces sycophancy under authority pressure, and capability-targeted adapter training raises agent success by isolating gaps rather than optimizing one environment reward.
- "Trust" failures increasingly look like problems of social/organizational dynamics: multi-agent collectives and source labels systematically bias decisions (peer conformity, verbosity, and expertise effects; "Human vs AI" labels shift trust ratings from both human and LLM judges).
2) Key themes (clustered)
Theme: White-box reliability monitors (hallucination/faithfulness triage)
- Why it matters: deployment needs fast, local, evidence-conditioned checks that do not depend on extra judge models or resampling, especially in medical/RAG settings where unsupported claims are safety-critical.
- Representative papers:
- From Retinal Evidence to Safe Decisions: RETINA-SAFE and ECRT for Hallucination Risk Triage in Medical LLMs
- LatentAudit: Real-Time White-Box Faithfulness Monitoring for Retrieval-Augmented Generation with Verifiable Deployment
- What Models Know, How Well They Know It: Knowledge-Weighted Fine-Tuning for Learning When to Say “I Don’t Know”
- Common methods:
- Use paired conditions to isolate reliance on evidence (CTX vs NOCTX dual forward passes; calibration splits; multi-sample probing).
- Turn internal/model-derived signals into lightweight classifiers/threshold rules (XGBoost heads; Mahalanobis distance; instance-weighted losses); see the sketch after this theme.
- Optimize high-recall triage policies and actionable subtype splits (unsafe → gap vs contradiction; abstention via <IDK>).
- Open questions / failure modes:
- Generalization beyond the studied settings (structured retinal evidence; 7–8B open-weight models; RETINA-SAFE does not use patient-disjoint splits).
- Monitors guarantee faithfulness to the retrieved evidence, not the truth of the evidence itself (corpus poisoning remains possible).
- Calibration/threshold fragility near the decision boundary (quantization noise for verifiable deployment; subtype attribution in subtle-evidence cases).
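A minimal sketch of the Mahalanobis-style monitor idea, not any paper's implementation: the caller is assumed to supply pooled hidden-state features (e.g. residual-stream vectors or CTX/NOCTX differences), and the class name, thresholds, and synthetic data below are placeholders.

```python
import numpy as np

class MahalanobisFaithfulnessMonitor:
    """Minimal white-box monitor: score a response's hidden-state feature by its
    Mahalanobis distance to 'faithful' calibration examples. Feature extraction
    (pooled activations, CTX/NOCTX differences, ...) is left to the caller."""

    def __init__(self, eps: float = 1e-6):
        self.eps = eps
        self.mean_ = None
        self.prec_ = None  # precision matrix (inverse covariance)

    def fit(self, faithful_feats: np.ndarray) -> "MahalanobisFaithfulnessMonitor":
        # faithful_feats: (n_calibration, d) features from responses judged faithful.
        self.mean_ = faithful_feats.mean(axis=0)
        cov = np.cov(faithful_feats, rowvar=False)
        cov += self.eps * np.eye(cov.shape[0])  # regularize for invertibility
        self.prec_ = np.linalg.inv(cov)
        return self

    def score(self, feats: np.ndarray) -> np.ndarray:
        # Higher distance = further from the faithful cluster = higher risk.
        diff = feats - self.mean_
        return np.sqrt(np.einsum("nd,de,ne->n", diff, self.prec_, diff))

    def calibrate_threshold(self, unfaithful_feats: np.ndarray, recall: float = 0.95) -> float:
        # Pick a threshold that flags `recall` of known-unfaithful calibration cases.
        scores = self.score(unfaithful_feats)
        return float(np.quantile(scores, 1.0 - recall))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 32
    faithful = rng.normal(0.0, 1.0, size=(500, d))    # stand-in for pooled activations
    unfaithful = rng.normal(0.8, 1.2, size=(200, d))  # shifted distribution
    mon = MahalanobisFaithfulnessMonitor().fit(faithful)
    thr = mon.calibrate_threshold(unfaithful, recall=0.9)
    flags = mon.score(unfaithful) >= thr
    print(f"threshold={thr:.2f}, flagged {flags.mean():.0%} of unfaithful samples")
```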
Theme: Agent-stack security: tool exfiltration + vector-DB poisoning + formally proven code vulnerabilities
- Why it matters: real-world agent stacks introduce new attack surfaces (memory, tools, retrieval, vector stores); defenses that only inspect retrieved text or rely on static tooling can miss the actual channels.
- Representative papers:
- Your LLM Agent Can Leak Your Data: Data Exfiltration via Backdoored Tool Use
- Can You Trust the Vectors in Your Vector Database? Black-Hole Attack from Embedding Space Defects
- Broken by Default: A Formal Verification Study of Security Vulnerabilities in AI-Generated Code
- A Formal Security Framework for MCP-Based AI Agents: Threat Taxonomy, Verification Models, and Defense Mechanisms
- Common methods:
- Move from heuristic detection to provable/structured reasoning (SMT witnesses; geometric hubness theory; formal LTS safety properties); see the SMT sketch after this theme.
- Evaluate attacks and defenses at system boundaries rather than only on model text (tool-call payloads, reranker delivery, ANN index behavior).
- Emphasize end-to-end exploitability (ASan-confirmed PoCs; delivery rates through the full stack; retrieval-ranking manipulation).
- Open questions / failure modes:
- Practical mitigations are undertested: the MCP "reference architecture" has not been implemented, and exfiltration defenses still need validation of egress/payload auditing.
- Detection/mitigation trade-offs: hubness transforms can lower attack success but may crush recall, and scalable detection adds extra k-NN overhead.
- "Secure prompting" is weak: in the formal code study, security instructions lowered the vulnerability rate by only about 4 percentage points.
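A minimal sketch of the SMT-witness idea, assuming the z3-solver Python bindings; the audited snippet and constraints are hypothetical and far simpler than the paper's harness. It encodes a 32-bit multiplication from a generated allocation and asks Z3 for an input that makes it wrap, which then serves as a concrete exploitability witness.

```python
# pip install z3-solver
from z3 import BitVec, BitVecVal, Solver, ZeroExt, sat

# Hypothetical AI-generated snippet under audit:
#   uint32_t total = count * 16;
#   char *buf = malloc(total);
#   memcpy(buf, src, count * 16);   // heap overflow if count * 16 wraps around
count = BitVec("count", 32)
elem = BitVecVal(16, 32)

s = Solver()
wide = ZeroExt(32, count) * ZeroExt(32, elem)  # true 64-bit product
narrow = ZeroExt(32, count * elem)             # 32-bit product, then widened
s.add(wide != narrow)                          # wrap-around (overflow) condition
s.add(count > 0)                               # attacker-controllable, nonzero

if s.check() == sat:
    m = s.model()
    # Any count >= 2**28 makes count * 16 wrap in 32 bits.
    print("exploitability witness: count =", m[count])
else:
    print("no overflow witness under these constraints")
```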
Theme: Trustworthy agent evaluation via traces, rubrics, and multi-turn workflows
- Why it matters: pass rates and final-answer judging systematically overestimate readiness; real deployments need auditability, robustness under faults, and evidence-grounded multi-step behavior.
- Representative papers:
- Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents
- EpiBench: Benchmarking Multi-turn Research Workflows for Multimodal Agents
- FrontierFinance: A Long-Horizon Computer-Use Benchmark of Real-World Financial Tasks
- Common methods:
- Require process evidence (execution traces + audit logs + snapshots; evidence checklists; rubric-based scoring).
- Stress long horizons and tool-disabled phases to test memory/evidence reuse (EpiBench's final turn; finance deliverables).
- Separate peak capability from reliability (Pass@k vs Pass^k; robustness under injected failures); see the estimator sketch after this theme.
- Open questions / failure modes:
- Cost/complexity of running the full suites at scale (multiple trials; human baselines; heavy tooling infrastructure).
- Judge bias persists even with rubrics (FrontierFinance judges over-score; EpiBench relies on LLM judges despite consistency checks).
- Memory remains the main bottleneck: a tool-disabled final turn sharply reduces success rates, and robustness failures show up as inconsistency across trials.
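An illustrative estimator (not Claw-Eval's code) for the Pass@k vs Pass^k distinction, assuming per-task boolean trial outcomes: Pass@k asks whether at least one of k trials succeeds (peak capability), while Pass^k asks whether all k succeed (reliability).

```python
from itertools import combinations
from statistics import mean

def pass_at_k(outcomes: list[bool], k: int) -> float:
    """Estimate P(at least one success in k trials) for one task,
    averaged over all k-sized subsets of the recorded trials."""
    return mean(any(s) for s in combinations(outcomes, k))

def pass_pow_k(outcomes: list[bool], k: int) -> float:
    """Estimate P(all k trials succeed) for one task (a reliability lower bar)."""
    return mean(all(s) for s in combinations(outcomes, k))

# Example: 3 tasks, 4 recorded trials each (True = trial passed).
trials = {
    "task_a": [True, True, False, True],
    "task_b": [True, False, False, False],
    "task_c": [True, True, True, True],
}
k = 2
peak = mean(pass_at_k(v, k) for v in trials.values())
reliable = mean(pass_pow_k(v, k) for v in trials.values())
print(f"Pass@{k} = {peak:.2f}  (peak capability)")
print(f"Pass^{k} = {reliable:.2f}  (reliability)")
```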
Theme: Social pressure, collective dynamics, and trust heuristics
- Why it matters: many failures are not "reasoning errors" but socially mediated: authority cues, majority influence, source labels, and group value composition change outcomes and induce harmful behavior.
- Representative papers:
- Pressure, What Pressure? Sycophancy Disentanglement in Language Models via Reward Decomposition
- Social Dynamics as Critical Vulnerabilities that Undermine Objective Decision-Making in LLM Collectives
- Label Effects: Shared Heuristic Reliance in Trust Assessment by Humans and LLM-as-a-Judge
- Human Values Matter: Investigating How Misalignment Shapes Collective Behaviors in LLM Agent Communities
- Common methods:
- Operationalize social failure modes with controlled manipulations (authority-pressure levels; number of adversaries; rhetorical style; value-composition sweeps).
- Use contrastive setups to isolate causal drivers (opposing contexts; success vs failure trajectories; counterfactual label swaps); a label-swap sketch follows this theme.
- Measure both macro outcomes (community resilience, group stability) and micro behaviors (deception, betrayal, sycophancy).
- Open questions / failure modes:
- Transfer to realistic multi-turn, adversarial, and culturally diverse forms of pressure remains incomplete (sycophancy transfer to latent pressures such as emotional investment is weaker).
- Annotation/judge bias risks (LLMs label emergent behaviors; attention/gaze comparisons are only correlational).
- Representative-agent aggregation is fragile to verbosity/expertise cues; robust aggregation protocols beyond "read peers, then decide" are needed.
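A sketch of the counterfactual label-swap protocol; `judge_trust` is a placeholder for whichever human- or LLM-judge scoring function is being audited, and the toy judge is fabricated to exhibit a label bias.

```python
from statistics import mean
from typing import Callable

def label_effect(
    items: list[str],
    judge_trust: Callable[[str, str], float],  # (text, source_label) -> trust in [0, 1]
    label_a: str = "Human",
    label_b: str = "AI",
) -> float:
    """Counterfactual label swap: score identical texts under both source labels
    and return the mean trust shift attributable to the label alone."""
    return mean(judge_trust(t, label_a) - judge_trust(t, label_b) for t in items)

if __name__ == "__main__":
    # Toy judge that leaks a label heuristic (+0.15 trust for "Human"-labeled text).
    def toy_judge(text: str, label: str) -> float:
        base = min(1.0, 0.4 + 0.01 * len(text.split()))
        return min(1.0, base + (0.15 if label == "Human" else 0.0))

    answers = ["The dam was built in 1936.", "Revenue grew 12% year over year, per the 10-K."]
    print(f"mean trust shift (Human - AI): {label_effect(answers, toy_judge):+.2f}")
```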
Theme: Scaling agent capability via targeted retrieval and targeted training
- Why it matters: as skill libraries and environments scale, agents fail from missing prerequisites or specific capability gaps; targeted retrieval/training improves efficiency and success rates under budget constraints.
- Representative papers:
- Graph of Skills: Dependency-Aware Structural Retrieval for Massive Agent Skills
- TRACE: Capability-Targeted Agentic Training
- Gym-Anything: Turn any Software into an Agent Environment
- Common methods:
- Replace flat retrieval with structure-aware selection (typed skill graphs + backward-aware diffusion; budgeted filling/completion); a prerequisite-expansion sketch follows this theme.
- Identify gaps from trajectories and train capability-specific adapters (one LoRA per capability; inference-time routing).
- Scale environments/tasks through automated creation + audit loops with checklist verifiers.
- Open questions / failure modes:
- Graph quality and static structure may bottleneck GoS; TRACE depends on the correctness of LLM-based capability labeling and routing (undermeasured).
- Long-horizon pass rates stay low even with large task corpora; auditing helps but does not close planning/verification gaps.
- Interaction with security: a larger tool/skill surface increases attack exposure unless paired with auditing/egress controls.
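A minimal sketch of dependency-aware skill selection under a context budget (not the Graph of Skills algorithm; the scores, prerequisites, and budget are made up): relevance-ranked skills are expanded with their prerequisites before being added, so the agent never receives a skill without what it depends on.

```python
def select_skills(
    scores: dict[str, float],       # relevance score per skill (e.g. from embedding search)
    prereqs: dict[str, list[str]],  # skill -> prerequisite skills (assumed acyclic)
    budget: int,                    # max number of skill docs to place in context
) -> list[str]:
    """Dependency-aware selection: take skills in relevance order, but whenever a
    skill is added, pull in its (transitive) prerequisites first, within budget."""
    selected: list[str] = []

    def add_with_prereqs(skill: str) -> bool:
        # Depth-first: prerequisites go in before the skill that needs them.
        if skill in selected:
            return True
        for dep in prereqs.get(skill, []):
            if not add_with_prereqs(dep):
                return False
        if len(selected) >= budget:
            return False
        selected.append(skill)
        return True

    for skill in sorted(scores, key=scores.get, reverse=True):
        if len(selected) >= budget:
            break
        add_with_prereqs(skill)
    return selected

# Toy example: "deploy_service" silently needs "build_image" and "auth_login".
scores = {"deploy_service": 0.92, "read_logs": 0.80, "rotate_keys": 0.41}
prereqs = {"deploy_service": ["build_image"], "build_image": ["auth_login"]}
print(select_skills(scores, prereqs, budget=4))
# -> ['auth_login', 'build_image', 'deploy_service', 'read_logs']
```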
3) Technical synthesis
- Multiple papers converge on contrastive signal design to avoid gradient/learning collapse: the sycophancy work uses opposing contexts + pressured variants; TRACE contrasts success vs failure trajectories; blinding uses A/B anonymization; label effects use counterfactual swaps.
- GRPO keeps reappearing as the common optimization primitive for agent/alignment training (sycophancy reward decomposition; TRACE per-capability adapters; CROSSOMNI's SFT+GRPO for coreference thinking modes); a group-relative advantage sketch follows this section.
- A clear trend: process-based evaluation beats output-only judging. Claw-Eval quantifies what plain judges miss (safety/robustness), FrontierFinance shows rubric guidance improves judge-human correlation, and EpiBench exposes hidden failures with a memory-only final turn.
- "Trustworthiness" is being decomposed into subtasks with explicit policies: safe/unsafe further split into gap vs contradiction (ECRT), safe vs risky faithfulness (LatentAudit), answer vs <IDK> (KWT), completion × safety × robustness (Claw-Eval).
- Security research is moving toward formal or quasi-formal witnesses: SMT SAT witnesses for exploitability; LTS properties for MCP; hubness-condition theory for vector poisoning; all reduce reliance on pattern matching.
- Several results show a generation-verification asymmetry: models often generate vulnerable code yet, in review mode, detect many of their own proven vulnerabilities; agents succeed when tools are available but fail when forced to rely on stored evidence.
- Multi-agent systems carry two distinct risk channels: group-composition effects (values → tipping points) and interaction-protocol effects (representatives swayed by majority/verbosity/expertise).
- Benchmarks increasingly include reliability under perturbation (Claw-Eval error injection; AutoPT framework comparisons; long-horizon finance tasks; CUA-World-Long budgets).
- Privacy/security defenses are trending toward boundary controls (prompt mediation + recovery; egress/payload auditing; signed hash-chain logs) rather than model-side alignment alone.
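A minimal sketch of the group-relative advantage computation at the core of GRPO (illustrative only, not any cited paper's training code): rewards for a group of sampled responses to the same prompt are normalized against that group's own mean and standard deviation.

```python
import numpy as np

def grpo_advantages(group_rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Group-relative advantages: for each prompt, sample G responses, score them,
    and normalize each reward against its own group's mean and std.
    group_rewards: (num_prompts, G) array of scalar rewards."""
    mean = group_rewards.mean(axis=1, keepdims=True)
    std = group_rewards.std(axis=1, keepdims=True)
    return (group_rewards - mean) / (std + eps)

# Example: 2 prompts x 4 sampled responses; rewards could come from, e.g., a
# decomposed sycophancy reward (pressure resistance + evidence responsiveness).
rewards = np.array([
    [0.2, 0.9, 0.4, 0.5],
    [0.0, 0.1, 0.0, 0.8],
])
adv = grpo_advantages(rewards)
print(adv.round(2))
# Responses above their group's mean get positive advantage and are upweighted
# in the policy-gradient update; those below get negative advantage.
```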
4) Top 5 papers (with "why now")
1) Broken by Default: A Formal Verification Study of Security Vulnerabilities in AI-Generated Code
- Formalizes exploitability with Z3 SMT witnesses (1,055 SAT findings) rather than heuristic flagging.
- Shows high vulnerability rates across seven frontier models (mean 55.8%; up to 87% for integer arithmetic, the worst category).
- Reveals a major tooling gap: six industrial tools miss 97.8% of the Z3-proven findings.
- Caveats: benchmark scope (500 prompts, temp=0), and the auxiliary ablations cover only a 50-prompt sub-corpus.
2) Your LLM Agent Can Leak Your Data: Data Exfiltration via Backdoored Tool Use
- Demonstrates an end-to-end agent exfiltration channel: session_memory → outbound retrieval carrying an encoded payload.
- High trigger activation (ASR >94%) with minimal impact on benign performance (MT-Bench drop <1%).
- Shows that reranker-targeted rewriting restores delivery through rerankers and bypasses retrieval-stage defenses (≈81–87% delivery through the full stack).
- Caveats: the attack requires an outbound connector plus memory; multi-turn leakage estimates assume user cooperation and depend on specific defense placement/configuration.
3) LatentAudit: Real-Time White-Box Faithfulness Monitoring for Retrieval-Augmented Generation with Verifiable Deployment
- Low-latency white-box faithfulness monitor (e.g. 0.942 AUROC on PubMedQA at 0.77 ms overhead).
- Robust across model families/datasets and stress tests; needs no separate judge model (only minimal projector calibration).
- Optional zk-verifiable decision rule via fixed-point quantization (k=16 preserves ~99.8% of AUROC); a quantization sketch appears after this Top-5 list.
- Caveats: requires open weights/activations; verifies faithfulness to retrieved evidence, not the truth of that evidence.
4) Claw-Eval: Toward Trustworthy Evaluation of Autonomous Agents
- Enforces trace-audited evaluation via three evidence channels and a post-hoc judge firewall.
- Quantifies how output-only judging fails (misses 44% of safety violations and 13% of robustness failures).
- Separates peak capability from reliability via Pass@k vs Pass^k and measures robustness with controlled error injection.
- Caveats: the available analysis does not clearly enumerate the limits/costs of running the full suite at scale.
5) Pressure, What Pressure? Sycophancy Disentanglement in Language Models via Reward Decomposition
- Makes sycophancy trainable via a decomposed reward (pressure resistance vs evidence responsiveness, plus auxiliary terms).
- Two-stage SFT+GRPO cuts answer-steering sycophancy by roughly 15–17 percentage points on SycophancyEval while improving stance consistency.
- Ablations show the reward terms control largely independent behavioral axes, enabling targeted correction.
- Caveats: heavily dependent on NLI scoring; transfer to some latent forms of pressure (e.g. emotional investment) is weaker.
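A small illustration (not LatentAudit's pipeline) of why fixed-point quantization barely moves a linear decision rule: weights, bias, and features are rounded to k fractional bits and the score is computed in integer arithmetic, which is the form a verifiable/zk-friendly deployment can check cheaply. All values below are synthetic.

```python
import numpy as np

def quantize_fixed_point(x: np.ndarray, frac_bits: int = 16) -> np.ndarray:
    """Round to signed fixed point with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return np.round(x * scale).astype(np.int64)

def fixed_point_scores(weights: np.ndarray, bias: float, feats: np.ndarray,
                       frac_bits: int = 16) -> np.ndarray:
    """Linear decision scores computed entirely in integers, then rescaled."""
    w_q = quantize_fixed_point(weights, frac_bits)
    b_q = quantize_fixed_point(np.array([bias]), frac_bits)[0]
    x_q = quantize_fixed_point(feats, frac_bits)
    # (x_q @ w_q) carries 2*frac_bits fractional bits; align the bias to match.
    acc = x_q @ w_q + (b_q << frac_bits)
    return acc / float(1 << (2 * frac_bits))

rng = np.random.default_rng(1)
w, b = rng.normal(size=64), 0.1
X = rng.normal(size=(1000, 64))
float_scores = X @ w + b
fp_scores = fixed_point_scores(w, b, X, frac_bits=16)
print("max |float - fixed_point| score gap:", np.abs(float_scores - fp_scores).max())
# With 16 fractional bits the score ranking is essentially unchanged, so
# threshold-based metrics such as AUROC are preserved.
```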
5) Practical next steps
- For RAG deployments, prototype a white-box faithfulness monitor (Mahalanobis-style or CTX/NOCTX difference features) and measure AUROC/latency under missing-retrieval and contradiction stress tests.
- Add egress controls + tool-call payload auditing to agent stacks: flag long, opaque/base64-like URL parameters, and split permissions so that "read memory" and "write network" cannot be chained without explicit authorization (see the payload-audit sketch after this list).
- Run a vector-DB poisoning red team: inject near-centroid vectors at roughly a 1% rate into a staging index, track MO@10/Recall@10, and evaluate hit-count filtering and hubness transforms (see the injection sketch after this list).
- Replace output-only evaluation with trace-based scoring: record tool calls, server-side audit logs, and snapshots, and compute a reliability lower bound (Pass^k) under injected transient tool/service failures.
- For multi-agent "committee" systems, harden aggregation against majority/verbosity/expertise effects: cap rationale length, randomize/normalize peer formatting, and test how representative accuracy changes with the number of adversaries and their verbosity.
- Add formal exploitability checks to code-generation pipelines (SMT-based where feasible) and exploit the generation-review asymmetry: require self-review plus formal-witness verification before merging.
- When fine-tuning for factuality, consider knowledge-aware weighting plus explicit abstention (e.g. <IDK> supervision) and track uncertainty-aware metrics (nAUPC, A-FPR, IDK-Precision), not just accuracy.
- For long-horizon professional agents (research/finance), enforce a memory-only final turn in internal evaluations to expose evidence-reuse failures, then iterate on memory indexing and evidence minimization.
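A minimal sketch of the tool-call payload audit described above; the length, alphabet, and entropy thresholds are illustrative defaults rather than values from any paper.

```python
import base64
import math
import re
from urllib.parse import urlparse, parse_qsl

B64_CHARS = re.compile(r"^[A-Za-z0-9+/_\-]+={0,2}$")

def shannon_entropy(s: str) -> float:
    counts = {c: s.count(c) for c in set(s)}
    return -sum(n / len(s) * math.log2(n / len(s)) for n in counts.values())

def suspicious_params(url: str, min_len: int = 48, min_entropy: float = 4.0) -> list[str]:
    """Flag URL query parameters that look like opaque encoded payloads:
    long, high-entropy, base64-alphabet values (possible exfiltration carriers)."""
    flagged = []
    for key, value in parse_qsl(urlparse(url).query):
        if len(value) >= min_len and B64_CHARS.match(value) and shannon_entropy(value) >= min_entropy:
            flagged.append(key)
    return flagged

if __name__ == "__main__":
    secret = b"user=alice; api_key=sk-9f27c1e4b8; last_files=[report_q3.xlsx, payroll.csv]"
    payload = base64.urlsafe_b64encode(secret).decode()
    benign = "https://example.com/search?q=quarterly+revenue&page=2"
    exfil = f"https://example.com/search?q=weather&ctx={payload}"
    print(suspicious_params(benign))  # -> []
    print(suspicious_params(exfil))   # -> ['ctx']
```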
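A toy red-team harness for the near-centroid injection experiment described above, not the paper's attack: it assumes a synthetic anisotropic embedding space (real text embeddings cluster in a narrow cone, which is what hubness exploits), brute-force cosine retrieval, and approximates MO@10 as poison occupancy of the top-10.

```python
import numpy as np

def topk_cosine(queries: np.ndarray, index: np.ndarray, k: int = 10) -> np.ndarray:
    """Brute-force cosine top-k; returns (num_queries, k) row indices into `index`."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = index / np.linalg.norm(index, axis=1, keepdims=True)
    return np.argsort(-(q @ d.T), axis=1)[:, :k]

rng = np.random.default_rng(0)
dim, n_docs, n_queries, k = 128, 5000, 200, 10

# Anisotropic corpus: all embeddings sit in a narrow cone around direction mu.
mu = rng.normal(size=dim)
mu /= np.linalg.norm(mu)
docs = mu + 0.4 * rng.normal(size=(n_docs, dim)) / np.sqrt(dim)
q_ids = rng.choice(n_docs, n_queries, replace=False)
queries = docs[q_ids] + 0.2 * rng.normal(size=(n_queries, dim)) / np.sqrt(dim)
clean_top = topk_cosine(queries, docs, k)

# Inject ~1% poisoned vectors placed at/near the corpus centroid ("black hole").
n_poison = n_docs // 100
centroid = docs.mean(axis=0)
poison = centroid + 0.001 * rng.normal(size=(n_poison, dim))
poisoned_index = np.vstack([docs, poison])
poisoned_top = topk_cosine(queries, poisoned_index, k)

poison_ids = set(range(n_docs, n_docs + n_poison))
# Stand-in for MO@10: share of queries whose top-10 contains any poisoned vector.
mo_at_10 = np.mean([any(i in poison_ids for i in row) for row in poisoned_top])
# Recall@10 against the clean index: how much of the original top-10 survives.
recall_at_10 = np.mean([len(set(a) & set(b)) / k for a, b in zip(clean_top, poisoned_top)])
print(f"MO@10 (poison occupancy): {mo_at_10:.1%}   Recall@10 vs clean: {recall_at_10:.1%}")
```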
Generated from per-paper analyses; no external browsing was performed.
