Daily AI Paper Report (2026-03-22)


Run stats

  • Candidates: 1253
  • Selected: 30
  • Deepread completed: 30
  • Window (UTC): 2026-03-20T00:00:00Z → 2026-03-21T00:00:00Z (weekend_backlog_unknown, expanded=0)
Selected papers
| arXiv ID | Title | Categories | Score | Why | Tags |
|---|---|---|---|---|---|
| 2603.14987 | Beyond Benchmark Islands: Toward Representative Trustworthiness Evaluation for Agentic AI | cs.CL, cs.DB | 93 | Argues for representative trustworthiness eval for agentic AI; proposes HAA framework. | agent-evaluation, trustworthiness, sociotechnical, benchmarks, agents |
| 2603.19011 | Security awareness in LLM agents: the NDAI zone case | cs.CR, cs.AI | 92 | Measures whether LLM agents can infer secure vs insecure execution; key for TEE/tool-use safety. | agent-security, TEE, situational-awareness, evaluation, tool-use |
| 2603.18577 | MedForge: Interpretable Medical Deepfake Detection via Forgery-aware Reasoning | cs.AI | 92 | Large benchmark + grounded reasoning for medical deepfake detection; strong safety relevance. | deepfake-detection, multimodal, benchmark, grounded-reasoning, medical-safety, localization |
| 2603.15542 | InterveneBench: Benchmarking LLMs for Intervention Reasoning and Causal Study Design in Real Social Systems | cs.CY, cs.AI | 92 | 744 real studies to test LLM causal intervention & design reasoning; strong eval gap. | benchmark, evaluation, causal-reasoning, interventions, social-science, LLM |
| 2603.14761 | BrainBench: Exposing the Commonsense Reasoning Gap in Large Language Models | cs.AI | 92 | New commonsense benchmark; shows big gaps on brainteasers even for frontier LLMs. | evaluation, commonsense, reasoning, benchmark, robustness |
| 2603.17623 | ARES: Scalable and Practical Gradient Inversion Attack in Federated Learning through Activation Recovery | cs.LG, cs.CR | 92 | Practical gradient inversion attack (no arch mods) reconstructs data from large FL batches. | federated-learning, privacy, gradient-inversion, security, data-leakage, attack |
| 2603.14730 | GNNVerifier: Graph-based Verifier for LLM Task Planning | cs.LG | 91 | Non-LLM graph verifier for LLM plans; targets structural hallucinations & dependency errors in agents. | agents, planning, verification, hallucinations, graph-methods, robustness |
| 2603.15397 | SFCoT: Safer Chain-of-Thought via Active Safety Evaluation and Calibration | cs.CR, cs.AI | 90 | Monitors/calibrates unsafe intermediate CoT steps to resist jailbreaks, not just final output. | jailbreaks, chain-of-thought, safety-monitoring, calibration, defense |
| 2603.15615 | Mechanistic Origin of Moral Indifference in Language Models | cs.CL, cs.AI | 90 | Mechanistic study of moral concept collapse + latent “moral indifference”; proposes representation fix. | mechanistic-interpretability, alignment, representations, moral-reasoning, safety |
| 2603.18895 | From Accuracy to Readiness: Metrics and Benchmarks for Human-AI Decision-Making | cs.HC, cs.AI, cs.LG | 90 | Practical readiness metrics for human-AI teaming; targets miscalibrated reliance & safety signals. | human-AI teaming, evaluation, calibration, reliance, safety-metrics, deployment |
| 2603.17948 | VideoAtlas: Navigating Long-Form Video in Logarithmic Compute | cs.CV, cs.AI | 90 | Hierarchical lossless video representation enabling long-video navigation with log compute. | long-context, video, agents, memory, efficient-inference, multimodal |
| 2603.18767 | A Concept is More Than a Word: Diversified Unlearning in Text-to-Image Diffusion Models | cs.AI | 89 | Improves diffusion concept unlearning beyond keywords; reduces brittle/over-forgetting in safety edits. | diffusion, unlearning, content-safety, model-editing, robustness |
| 2603.15364 | CRASH: Cognitive Reasoning Agent for Safety Hazards in Autonomous Driving | cs.AI, cs.CL | 89 | LLM agent for AV incident analysis + curated 2,168-case dataset; practical safety auditing. | agent, autonomous-driving, safety, incident-analysis, dataset, LLM |
| 2603.15372 | SKILLS: Structured Knowledge Injection for LLM-Driven Telecommunications Operations | cs.SE, cs.AI, cs.CR | 88 | Tool-using LLM agent benchmark with live mock APIs + deterministic rubrics for telecom ops. | agents, tool-use, benchmark, evaluation, enterprise, APIs |
| 2603.14778 | $p^2$RAG: Privacy-Preserving RAG Service Supporting Arbitrary Top-$k$ Retrieval | cs.CR, cs.AI | 88 | Privacy-preserving RAG enabling arbitrary top-k without costly secure sorting; practical for LLM apps. | RAG, privacy, secure-retrieval, cryptography, deployment |
| 2603.17759 | Harm or Humor: A Multimodal, Multilingual Benchmark for Overt and Covert Harmful Humor | cs.CL, cs.AI | 88 | Multimodal+multilingual benchmark for harmful humor incl. covert harm; strong safety eval value. | AI safety, benchmark, harmful content, multimodal, multilingual, humor, toxicity detection, Arabic |
| 2603.17683 | Sensi: Learn One Thing at a Time -- Curriculum-Based Test-Time Learning for LLM Game Agents | cs.AI, cs.LG | 88 | Structured test-time learning for LLM game agents; curriculum + steerable context control-plane. | llm-agents, test-time-learning, curriculum-learning, agent-architecture, memory, evaluation |
| 2603.18680 | Revisiting Label Inference Attacks in Vertical Federated Learning: Why They Are Vulnerable and How to Defend | cs.LG, cs.CR | 88 | Reframes label inference in VFL via mutual info; explains vulnerabilities and proposes defenses. | vertical-federated-learning, privacy, label-inference, mutual-information, defense |
| 2603.18793 | Functional Subspace Watermarking for Large Language Models | cs.CR, cs.AI | 86 | LLM watermarking robust to fine-tune/quantize/distill by anchoring signals in functional subspace. | watermarking, model-ownership, robustness, LLMs, security |
| 2603.14756 | Towards Privacy-Preserving Machine Translation at the Inference Stage: A New Task and Benchmark | cs.CL, cs.AI | 86 | Defines inference-time privacy task+benchmark for MT; fills evaluation gap for privacy-preserving NLP. | privacy, machine-translation, benchmark, inference, evaluation |
| 2603.14771 | OpenHospital: A Thing-in-itself Arena for Evolving and Benchmarking LLM-based Collective Intelligence | cs.AI | 86 | Interactive arena to evolve/benchmark multi-agent collective intelligence; strong eval framing. | agents, multi-agent, collective-intelligence, benchmark, evaluation, healthcare |
| 2603.14911 | Fine-tuning RoBERTa for CVE-to-CWE Classification: A 125M Parameter Model Competitive with LLMs | cs.CR, cs.CL | 86 | CVE→CWE classifier competitive with LLMs; large dataset + strong macro-F1 on rare classes. | cybersecurity, vulnerability-classification, CVE, CWE, robustness, dataset |
| 2603.14855 | PCodeTrans: Translate Decompiled Pseudocode to Compilable and Executable Equivalent | cs.SE, cs.AI | 86 | Feedback + dynamic validation to prevent semantic hallucinations in decompiled code recovery. | code, verification, hallucinations, program-synthesis, security |
| 2603.15566 | Lore: Repurposing Git Commit Messages as a Structured Knowledge Protocol for AI Coding Agents | cs.SE, cs.AI, eess.SY | 86 | Practical protocol to preserve agent coding rationale in git; improves auditability & safer agent workflows. | coding-agents, software-engineering, auditability, agent-workflows, knowledge-management, tooling |
| 2603.09253 | Efficient Reasoning at Fixed Test-Time Cost via Length-Aware Attention Priors and Gain-Aware Training | cs.LG | 86 | Training-only priors for efficient reasoning at fixed test-time compute; broadly reusable. | efficient-reasoning, test-time-compute, attention, training-tricks, transformers |
| 2603.18570 | Attack by Unlearning: Unlearning-Induced Adversarial Attacks on Graph Neural Networks | cs.LG, cs.CR | 85 | Shows approximate unlearning can be weaponized into attacks; introduces unlearning corruption. | machine-unlearning, adversarial-attacks, privacy, GNNs, security |
| 2603.17522 | Detecting the Machine: A Comprehensive Benchmark of AI-Generated Text Detectors Across Architectures, Domains, and Adversarial Conditions | cs.CL, cs.AI | 84 | Broad benchmark of AI-text detectors across domains/LLMs with adversarial conditions; useful for eval. | evaluation, AI-generated-text, robustness, adversarial, benchmark |
| 2603.18538 | Beyond Passive Aggregation: Active Auditing and Topology-Aware Defense in Decentralized Federated Learning | cs.LG, stat.ME | 84 | Active auditing metrics + topology-aware defenses for decentralized FL backdoors; practical security angle. | federated-learning, backdoors, auditing, anomaly-detection, security, graph-topology |
| 2603.19182 | Box Maze: A Process-Control Architecture for Reliable LLM Reasoning | cs.AI, cs.CL | 84 | Process-control architecture to reduce hallucination/adversarial failures; safety-oriented framing. | LLM-safety, hallucination, robustness, process-supervision, architecture, adversarial |
| 2603.15421 | CLAG: Adaptive Memory Organization via Agent-Driven Clustering for Small Language Model Agents | cs.CL, cs.AI | 84 | Agent memory clustering to reduce irrelevant/corrupt context; practical for small-model agents. | agents, memory, retrieval, small language models, RAG, context management, robustness |

AI Paper Insight Brief

2026-03-22

1) Executive takeaways (read this first)

  • Verification is shifting from “ask another LLM” to structured, inspectable signals: graph-structured plan verification with node/edge risk (GNNVerifier) and stepwise CoT safety scoring + intervention (SFCoT) both show large robustness gains versus prompt-only baselines.
  • Privacy/security work is becoming more “systems-realistic”: private RAG now targets arbitrary large top‑k efficiently (p²RAG), FL attacks remove “architecture modification” assumptions (ARES), and VFL defenses exploit where label information actually lives (move the cut layer).
  • Benchmarks are getting more diagnostic (and more multi-dimensional): BrainBench separates accuracy vs consistency (stochasticity), harmful-humor adds multimodal + Arabic + implicit harm, and AI-text detection is stress-tested under length-matching + domain shift + adversarial rewriting.
  • Agent reliability bottlenecks are increasingly about representation and memory organization: CLAG’s cluster-local memory evolution improves SLM robustness and latency; “moral indifference” work argues behavioral alignment can leave latent geometry misaligned and shows SAE-based steering improves adversarial safety metrics.
  • Execution-grounded feedback loops beat static checks in code/security pipelines: PCodeTrans uses in-situ binary substitution + ASan + differential tracing to drive LLM repair to near-perfect function-level equivalence on coreutils/binutils.

2) Key themes (clusters)

Theme: Structured verification & process-level safety for agents

  • Why it matters: Agent failures often come from cross-step structure (plans) or intermediate reasoning (CoT) that final-answer filters miss. Verifiers that expose where things go wrong enable targeted fixes and safer autonomy.
  • Representative papers: GNNVerifier (2603.14730), SFCoT (2603.15397), Beyond Benchmark Islands / HAA (2603.14987).
  • Common approach:
    • Convert unstructured agent artifacts into structured objects (plan graphs; stepwise CoT segments; scenario distributions).
    • Produce localized diagnostics (node/edge risk; per-step safety scores) and gate edits/continuations on verifier signals.
    • Use synthetic supervision / controlled perturbations when real fine-grained labels are missing (plan-graph perturbations; scenario suites).
  • Open questions / failure modes:
    • Synthetic perturbations may not match real planner errors (distribution gap in GNNVerifier).
    • Runtime overhead and scalability of stepwise CoT evaluation + paraphrase variance checks (SFCoT doesn’t report latency).
    • “Representative scenario sampling” remains under-validated at scale (HAAF demo is 24 scenarios, single model).
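
The synthetic-supervision step above can be sketched as follows. This is a minimal, hypothetical illustration of perturbation-based labeling (the REPLACE/DROP/COMPRESS operation names come from this report; their exact semantics here are assumptions): corrupt a known-good plan in structured ways to mint negative examples for a verifier.

```python
import random

def perturb_plan(steps, op, rng):
    # Apply one structured corruption to a plan (illustrative semantics).
    steps = list(steps)
    if op == "REPLACE":
        # swap one step for a mismatched action
        steps[rng.randrange(len(steps))] = "noop()"
    elif op == "DROP":
        # remove a step, silently breaking a dependency
        del steps[rng.randrange(len(steps))]
    elif op == "COMPRESS":
        # merge two adjacent steps, losing an intermediate result
        i = rng.randrange(len(steps) - 1)
        steps[i] = steps[i] + " ; " + steps.pop(i + 1)
    return steps

def make_training_pairs(plan, n, seed=0):
    # One valid example plus n corrupted negatives for verifier training.
    rng = random.Random(seed)
    pairs = [(plan, 1)]
    for _ in range(n):
        op = rng.choice(["REPLACE", "DROP", "COMPRESS"])
        pairs.append((perturb_plan(plan, op, rng), 0))
    return pairs

plan = ["search(query)", "read(doc)", "summarize(doc)", "answer()"]
pairs = make_training_pairs(plan, n=3)
```

A real pipeline would record *which* node or edge was perturbed, so the verifier can be trained with localized node/edge risk heads rather than a single plan-level label.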

Theme: Privacy-preserving inference & leakage-aware ML systems

  • Representative papers: $p^2$RAG (2603.14778), ARES (2603.17623), VFL label inference (2603.18680), privacy-preserving MT (2603.14756).

Theme: Memory, long-context navigation, and fixed-compute efficiency

  • Representative papers: VideoAtlas (2603.17948), CLAG (2603.15421), length-aware attention priors (2603.09253).

Theme: Benchmarks that expose reliability gaps (stochasticity, shift, implicit harm)

  • Representative papers: BrainBench (2603.14761), Harm or Humor (2603.17759), AI-text detector benchmark (2603.17522).

Theme: Security & provenance for models and ML pipelines

  • Representative papers: Functional Subspace Watermarking (2603.18793), Attack by Unlearning (2603.18570), decentralized FL auditing (2603.18538).

3) Technical synthesis

  • “Structure-first” is a recurring pattern: plans→graphs (GNNVerifier), CoT→steps (SFCoT), memory→clusters (CLAG), video→recursive grids (VideoAtlas). The shared bet is that explicit structure enables better diagnostics, gating, and compute control.
  • Synthetic supervision is becoming the default when fine-grained labels are missing: plan perturbations (REPLACE/DROP/COMPRESS), sandbox scenarios (HAAF), synthetic patients (OpenHospital), medical forgery generation (MedForge-90K).
  • Verification loops increasingly require acceptance criteria: GNNVerifier accepts edits only if graph score improves; SFCoT rewrites/truncates based on per-step safety; PCodeTrans iterates until tests + ASan/BP-Diff pass.
  • Compute budgeting is being formalized as a first-class knob: VideoAtlas depth bound d; RPA cached bias + training-only controller; CLAG two-stage retrieval reduces search space and latency.
  • Information localization matters for privacy: VFL shows label information concentrates in deeper/top layers; defenses can be structural (cut-layer placement) rather than noise-only.
  • Attack realism is increasing: ARES assumes attacker can set weights/biases (no architecture change) and uses sparse recovery; unlearning corruption uses legally-mandated deletion as the trigger; p²RAG targets arbitrary top‑k (practical long-context use).
  • Reliability is being measured as variance, not just mean: BrainBench’s accuracy–consistency gap (10.3 pp average) highlights stochastic reasoning as a safety/reliability axis.
  • “Judge models” are everywhere, but with different roles: grading (InterveneBench), disclosure scoring (NDAI-zone study), reasoning quality (MedForge), and BrainBench answer judging—raising a cross-cutting concern about judge bias and reproducibility.
  • Execution-grounded evaluation is a strong differentiator: PCodeTrans uses the original binary + official test suites as an oracle; this is a template for reducing “semantic hallucination” in code transformations.
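
The "variance, not just mean" point can be made concrete with a simple multi-run protocol. The definitions below are plausible stand-ins, not BrainBench's exact metrics: accuracy averages correctness over all runs and items, while consistency counts the items every run answers identically, so the gap between the two exposes stochastic reasoning.

```python
def accuracy(runs, gold):
    # mean correctness over all runs and all items
    total = sum(ans == gold[i] for run in runs for i, ans in enumerate(run))
    return total / (len(runs) * len(gold))

def consistency(runs):
    # fraction of items where every run gives the same answer
    n_items = len(runs[0])
    same = sum(len({run[i] for run in runs}) == 1 for i in range(n_items))
    return same / n_items

gold = ["A", "B", "C", "D"]
runs = [["A", "B", "C", "A"],
        ["A", "B", "D", "A"],
        ["A", "B", "C", "A"]]
acc = accuracy(runs, gold)   # 8/12: item 3 is wrong everywhere, item 2 once
con = consistency(runs)      # 0.75: items 0, 1, 3 agree across all runs
```

Note that consistency can exceed accuracy (item 3 here is consistently wrong), which is exactly why reporting both is more diagnostic than accuracy alone.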

4) Top 5 papers (with “why now”)

1) GNNVerifier: Graph-based Verifier for LLM Task Planning

  • Adds a graph-structured verifier that scores whole plans and localizes risky nodes/edges (tool/step mismatches, dependency issues).
  • Uses synthetic perturbations to create node/edge supervision where real labels are missing, enabling diagnosis heads.
  • Demonstrates verification-guided local edits (replace/insert) accepted only when the verifier score improves; reports consistent gains vs VeriPlan across datasets/planners.
  • Skepticism: synthetic error distribution may not match real planner failures; no live tool-execution evaluation.
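
The acceptance rule described above (keep an edit only if the verifier score strictly improves) amounts to a small hill-climbing loop. A hypothetical sketch, with stub scorer and edit proposer standing in for GNNVerifier's graph model and local-edit generator:

```python
def repair_plan(plan, score_fn, propose_edits, max_rounds=5):
    # Greedy verification-gated repair: accept a candidate edit only when
    # the verifier score strictly improves; stop when no edit helps.
    best, best_score = plan, score_fn(plan)
    for _ in range(max_rounds):
        improved = False
        for candidate in propose_edits(best):
            s = score_fn(candidate)
            if s > best_score:
                best, best_score, improved = candidate, s, True
                break
        if not improved:
            break
    return best, best_score

def toy_score(plan):
    # stub verifier: reward a final answer step, penalize dead steps
    return (plan[-1] == "answer()") - plan.count("noop()")

def toy_edits(plan):
    # stub edit proposer: drop dead steps, or append a final answer step
    yield [s for s in plan if s != "noop()"]
    yield plan + ["answer()"]

plan = ["search(q)", "noop()", "summarize()"]
fixed, score = repair_plan(plan, toy_score, toy_edits)
# fixed == ["search(q)", "summarize()", "answer()"], score == 1
```

The strict-improvement gate is what prevents edit loops from oscillating; a production version would also cap edit distance from the original plan.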

2) $p^2$RAG: Privacy-Preserving RAG Service Supporting Arbitrary Top-$k$ Retrieval

  • Replaces secure sorting with interactive bisection to support arbitrary/large k efficiently—aligned with long-context LLM trends.
  • Uses standard MPC primitives (Shamir sharing, Beaver triples, DCFs) and reports 3–300× speedups vs PRAG for k=16–1024.
  • Provides explicit leakage bounds (physical leakage O(log²N) + functional leakage k+ξ).
  • Skepticism: assumes trusted dealer + two non-colluding semi-honest servers; PIR and offline stages not benchmarked.
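
The core selection idea (bisection instead of sorting) is easy to see in the clear. The sketch below is a plaintext analogue only, with all MPC machinery omitted: binary-search a score threshold until exactly k scores exceed it, so top-k selection never requires a full secure sort.

```python
def topk_by_bisection(scores, k, iters=50):
    # Binary-search a cutoff t so that exactly k scores lie above it.
    # In p^2RAG the comparison count would be computed under MPC; here
    # everything is in the clear to show the selection logic alone.
    lo, hi = min(scores), max(scores)
    for _ in range(iters):
        mid = (lo + hi) / 2
        count = sum(s > mid for s in scores)
        if count > k:
            lo = mid      # threshold too low: too many survivors
        elif count < k:
            hi = mid      # threshold too high: too few survivors
        else:
            return [i for i, s in enumerate(scores) if s > mid]
    return [i for i, s in enumerate(scores) if s > lo]

idx = topk_by_bisection([0.1, 0.9, 0.5, 0.7, 0.3], k=2)   # indices 1 and 3
```

Tied scores can make an exact count of k unreachable, so a real implementation needs a tie-breaking rule; the interactive rounds are also where the RTT costs flagged in the skepticism bullet come from.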

3) SFCoT: Safer Chain-of-Thought via Active Safety Evaluation and Calibration

  • Moves safety from final-output filtering to stepwise CoT monitoring with lexical/semantic/policy scoring and gray-zone calibration.
  • Reports a large jailbreak reduction: ASR 58.97% → 12.31%, while preserving ~91.2% average utility on MMLU/GSM8K/MBPP.
  • Ablations attribute gains to the consistency verifier and rewrite intervention.
  • Skepticism: runtime/latency overhead not reported; evaluated on a single model (Qwen3-8B).
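
The stepwise gating described above can be sketched as a three-way gate per CoT step: clearly safe steps pass, clearly unsafe steps truncate the chain, and the gray zone triggers a rewrite. The score function, thresholds, and blocklist below are illustrative stand-ins, not SFCoT's actual scorers.

```python
def gate_cot(steps, score_fn, rewrite_fn, safe=0.8, unsafe=0.3):
    # Per-step safety gate: pass / rewrite / truncate.
    kept = []
    for step in steps:
        s = score_fn(step)
        if s >= safe:
            kept.append(step)              # clearly safe: pass through
        elif s <= unsafe:
            break                          # clearly unsafe: truncate here
        else:
            kept.append(rewrite_fn(step))  # gray zone: rewrite, then keep
    return kept

BLOCKLIST = {"exploit", "weaponize"}       # toy lexical signal

def toy_score(step):
    words = set(step.lower().split())
    if words & BLOCKLIST:
        return 0.0
    return 0.5 if "hack" in words else 1.0

steps = ["parse the request",
         "hack around the parser quirk",
         "exploit the target"]
safe_steps = gate_cot(steps, toy_score, lambda s: "[rewritten] " + s)
# keeps step 1, rewrites step 2, truncates at step 3
```

The paper's version layers semantic and policy scoring plus paraphrase-consistency checks on top of this skeleton, which is where the unreported latency overhead would accrue.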

4) PCodeTrans: Translate Decompiled Pseudocode to Compilable and Executable Equivalent

  • Introduces in-situ substitutable execution: hot-swap repaired functions into the original binary to use real execution as an equivalence oracle.
  • Uses ASan (substitute-only) + breakpoint-matched differential tracing to generate actionable runtime deltas for iterative LLM repair.
  • Achieves 100% function-level compilation and ~99.6–99.9% behavioral equivalence on coreutils/binutils (unstripped).
  • Skepticism: platform-specific (Linux ELF/x86_64); indirect-call signature recovery and standalone recompilation remain hard.
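
The execution-as-oracle loop generalizes beyond decompilation. A minimal sketch under simplifying assumptions: Python callables stand in for the original binary and candidate translations, and a stub candidate list stands in for iterative LLM repair.

```python
def differential_test(oracle, candidate, inputs):
    # Return the first input where candidate behavior diverges from the
    # oracle (the original function's observed behavior), else None.
    for x in inputs:
        if oracle(x) != candidate(x):
            return x
    return None

def repair_loop(oracle, candidates, inputs):
    # Accept a candidate only when differential testing finds no divergence.
    for cand in candidates:
        bad = differential_test(oracle, cand, inputs)
        if bad is None:
            return cand
        # In the real pipeline, `bad` plus ASan / trace deltas would be
        # fed back to the LLM to produce the next candidate.
    return None

oracle = lambda x: abs(x)                  # ground truth: original behavior
candidates = [lambda x: x,                 # buggy: wrong for negatives
              lambda x: -x if x < 0 else x]
fixed = repair_loop(oracle, candidates, inputs=[-2, -1, 0, 1, 2])
```

What PCodeTrans adds on top of this skeleton is the hard part: hot-swapping the repaired function into the original binary so the oracle runs on real program state, not isolated unit inputs.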

5) Mechanistic Origin of Moral Indifference in Language Models

  • Diagnoses “moral indifference” as a latent-geometry problem (categorical/gradient/structural/dimensional) using a prototype-based moral vector ground truth.
  • Uses SAEs + targeted feature fine-tuning + additive steering to improve adversarial safety outcomes on Flames (e.g., PSC1 908→953; win-rate peak 75.4%).
  • Bridges mechanistic interpretability with alignment by showing a causal intervention on internal features.
  • Skepticism: intervention demonstrated mainly on Qwen3-8B; only a tiny fraction of SAE features correlate with moral dimensions; steering is sensitive to α.
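
The additive steering intervention reduces to h' = h + α·v, where v is a feature direction (here, from an SAE) and α is the sensitivity knob flagged in the skepticism bullet. A minimal sketch with plain Python lists standing in for hidden states, and a hypothetical "moral salience" direction:

```python
def steer(hidden, direction, alpha):
    # Additive activation steering: h' = h + alpha * v, applied elementwise.
    return [h + alpha * v for h, v in zip(hidden, direction)]

h = [0.2, -0.1, 0.4]          # toy hidden state
v = [1.0, 0.0, -1.0]          # hypothetical SAE feature direction
h_steered = steer(h, v, alpha=0.5)
# [0.7, -0.1, -0.1] up to float rounding
```

In practice v would be a decoder column of a trained SAE and the hook would sit at one residual-stream layer; the paper's reported α-sensitivity means this scalar typically needs a per-model sweep.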

5) Practical next steps

  • If you build tool-using agents: prototype a plan-graph verifier that outputs node/edge risk and use it to drive local edits with acceptance tests (score must improve), mirroring GNNVerifier.
  • For jailbreak defense in CoT-enabled systems: measure ASR with and without stepwise CoT gating; log per-step safety scores and quantify utility retention on your core tasks (SFCoT-style).
  • For private RAG: evaluate whether your product needs dynamic/large top‑k; if yes, benchmark threshold/bisection-style retrieval vs sorting-based secure top‑k under realistic RTT and PIR costs (p²RAG highlights what to measure).
  • For federated/vertical FL deployments: run MI-by-layer diagnostics to see where label information concentrates, then test cut-layer advancement as a zero-overhead mitigation—while also measuring feature leakage risk (VFL paper’s trade-off).
  • For long-context memory in small agents: try cluster-local memory evolution + two-stage retrieval and track both answer quality and latency; ablate localized evolution vs global retrieval (CLAG).
  • For evaluation: add multi-run consistency (not just accuracy) to your internal reasoning benchmarks (BrainBench protocol), and include domain shift + adversarial rewriting if you rely on AI-text detectors.
  • For provenance/IP: if you distribute models that may be quantized/distilled, test subspace watermark robustness under your actual transformation pipeline and keep payload modest (FSW suggests ~16-bit practical capacity).
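
The MI-by-layer diagnostic suggested for VFL deployments can be sketched with a simple plug-in estimator: discretize each layer's activations, estimate I(label; activation) per layer, and look for where label information concentrates. The binning scheme and toy data below are illustrative assumptions; real diagnostics would use a proper multivariate MI estimator.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    # Plug-in MI estimate over discrete symbols: sum p(x,y) log2 p(x,y)/(p(x)p(y)).
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mi_by_layer(layer_activations, labels, bins=4):
    # Discretize each layer's (scalar, toy) activations into equal-width
    # bins, then score I(label; binned activation) per layer.
    out = {}
    for name, acts in layer_activations.items():
        lo, hi = min(acts), max(acts)
        width = (hi - lo) / bins or 1.0
        disc = [min(int((a - lo) / width), bins - 1) for a in acts]
        out[name] = mutual_information(disc, labels)
    return out

labels = [0, 0, 1, 1]
acts = {"layer1": [0.1, 0.2, 0.15, 0.22],   # label-agnostic activations
        "layer4": [0.0, 0.1, 0.9, 1.0]}     # label-separating activations
mi = mi_by_layer(acts, labels)              # layer4 carries more label info
```

If deeper layers dominate this profile, that is the signal for testing cut-layer advancement as the structural mitigation described above.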

Generated from per-paper analyses; no external browsing.