AI Native Paper Engineering: Lessons Learned from the Trenches
After spending ~$680 and countless hours with AI agents (Codex, Cursor, Gemini, etc.), I've learned a lot about what works (and what painfully doesn't) when using AI to write a technical paper (thesis/journal). This is not "best practice." It's a raw, evolving experience report. Human in the loop is non-negotiable.

🔥 The Hard Problems We Faced

- AI can't read local PDFs well
- No idea how to scaffold experimental code from zero
- Can't articulate requirements clearly to AI
- Model output doesn't match expectations
- Output is slow or unstable
- Token burn is terrifying
- One-session addiction (hard to restart)
- AI moves too fast → ADHD-like chaos

🧠 Two Prompting Frameworks That Actually Help

SCAFF — for feature/component requests

- Situation: tech stack, current progress, design style
- Challenge: exact requirements (validation, behavior)
- Audience: yourself or maintainers
- Format: file names, types, styling
- Foundations: constr...
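To make the SCAFF structure concrete, here is a minimal sketch of a prompt builder that assembles the five sections in order. The function name, field values, and file names are all hypothetical illustrations, not from the original post, and the last field's description is truncated in the source, so its exact scope here is an assumption.

```python
# Hypothetical SCAFF prompt builder; field contents are illustrative only.

def build_scaff_prompt(situation, challenge, audience, fmt, foundations):
    """Assemble a prompt following the SCAFF section order."""
    return "\n".join([
        f"Situation: {situation}",      # tech stack, current progress, design style
        f"Challenge: {challenge}",      # exact requirements (validation, behavior)
        f"Audience: {audience}",        # yourself or maintainers
        f"Format: {fmt}",               # file names, types, styling
        f"Foundations: {foundations}",  # (source text for this field is truncated)
    ])

prompt = build_scaff_prompt(
    situation="Python 3.11 + matplotlib; Figures 1-3 already drafted",
    challenge="a script that regenerates Figure 2 from results.csv",
    audience="future me, six months from now",
    fmt="single file plot_fig2.py, PEP 8, no notebooks",
    foundations="no new dependencies; must run offline",
)
print(prompt)
```

Filling every section before sending the request is the point: it forces you to articulate requirements the agent would otherwise have to guess.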