Fun Prompt Friday: 3rd Halloween Edition

Listen to a spooky-good Halloween-themed audio overview about these prompts by two over-caffeinated co-hosts.

Steve’s Best Prompts:
Quick Copy-Paste Sheet

Steve Little | Halloween, Oct 31, 2025 | v7.0h | CC BY-NC 4.0 | PDF of Print-friendly version

🎃 TRICK OR TREAT — Third Halloween Edition 🎃 Fifteen prompts to conjure better AI responses

Note from Steve: These are teaching tools from years of testing, not magic spells. If it’s not obvious what one does, cogitate on it or ask your chatbot for insight. Use them wisely—respect your AI platform’s terms of service and copyright. Also, keep in mind that many are designed to be appended to other material, your own prompts, or to reference a webpage you’re viewing. Remember: AI is a tool, not a truth oracle. Results vary… I’m not responsible for summoned demons or hallucinated ancestors. Happy prompting! 🎃

QUICK REFERENCE

Prompt              | Best For                         | Complexity
--------------------|----------------------------------|-----------
DAAI                | Quick content analysis           | ⭐
RCSI                | Quality improvement cycle        | ⭐
EXTRACT_ALL         | Exhaustive information capture   | ⭐
VITAL_FEW           | 80/20 learning                   | ⭐⭐
TALKING_POINTS      | Article to bullet points         | ⭐⭐
EXPLAIN_LAYPEOPLE   | Simplifying complex topics       | ⭐⭐
RESEARCH_PLAN       | Autonomous agent planning        | ⭐⭐⭐
ABRIDGE             | Professional summarization       | ⭐⭐⭐
CONDENSE_DENSITY    | High-density compression         | ⭐⭐⭐
ABCD_METHOD         | Iterative refinement             | ⭐⭐⭐
BUILD_PROMPT        | Collaborative prompt engineering | ⭐⭐⭐
RESEARCH_DESIGN     | Tool framework building          | ⭐⭐⭐
SUMMARIZE_CHAT      | Conversation documentation       | ⭐⭐⭐
RESEARCH_ASSIGNMENT | Deep research structuring        | ⭐⭐⭐⭐
COUNCIL_OF_EXPERTS  | Multi-perspective analysis       | ⭐⭐⭐⭐⭐

<STEVE’S PROMPTS — QUICK COPY-PASTE SHEET v07h_2025-10-31 – CC BY-NC 4.0>

📊 ANALYSIS

DAAI (Describe, Abstract (transcribe), Analyze, Interpret)

Four-word analysis command that describes content, abstracts key points, analyzes structure and meaning, then interprets significance. When appropriate, you can use “Transcribe” instead of “Abstract.”

<DAAI>
Describe. Abstract. Analyze. Interpret.
</DAAI>

RCSI (Review, Critique, Suggest, Improve)

Four-step quality improvement cycle that reviews content, critiques it, suggests improvements, then implements those improvements.

<RCSI>
Review. Critique. Suggest. Improve.
</RCSI>

EXTRACT_ALL

Single command for exhaustive extraction of all genealogical, historic, and cultural information from any content.

<EXTRACT_ALL>
Capture all genealogical, historic, and cultural information contained here.
</EXTRACT_ALL>

📝 CONDENSATION

ABRIDGE

Create professional abridgment at specified word count where AI reviews best practices internally before generating condensed version.

<ABRIDGE>
Generate an abridged version of the text of about [LENGTH] words; first, though, silently review best practices of abridgment (goal, method, style, etc.), then write the full abridged version.
</ABRIDGE>

CONDENSE_DENSITY

Reduce content to a target length while maximizing information retention, so that semantic density rises in proportion to the cut in length.

<CONDENSE_DENSITY>
Condense and distill that to about [LENGTH], increasing the semantic density inversely proportional to the cut in length, so there is as little loss of information as possible.
</CONDENSE_DENSITY>

TALKING_POINTS

Convert any article into 5-7 concise bullet-point sentences formatted as talking points.

<TALKING_POINTS>
Summarize this material, presenting your response as a list of no more than 5 to 7 bullet point sentences, as if talking points for a reporter covering this topic.
</TALKING_POINTS>

🔧 META-PROMPTS

BUILD_PROMPT

Collaborative prompt engineering where the AI acts as a partner to help build the framework for a new prompt without executing it, showing structure before implementation; use it to build instructions for an AI assistant, an OpenAI “Custom GPT,” a Google Gemini “Gem,” a “Project,” or a workspace at any of the major AI vendors.

<BUILD_PROMPT>
I need help crafting a prompt. So, pretend you are an AI engineer, helping me craft a prompt or instruction set to guide an LLM assistant; look at the context above and plan a framework to turn that into an assignment. Do not execute the assignment right now. We're just drafting the assignment prompt. Begin just by researching and reporting on the basic usage and best practices of the model, product, feature, assistant/agent, or workspace you are building, showing me the framework you plan to use for the assignment prompt/instruction set. Show me that framework now for approval, modification, or rejection; respond in a code block, wrapped in <INSTRUCTIONS> tags, in markdown syntax, fewer than 8000 characters.
</BUILD_PROMPT>

RESEARCH_DESIGN

Two-phase framework builder that first researches best practices for any AI model/product/feature, then designs an implementation framework and presents it for approval before execution.

<RESEARCH_DESIGN>
Research and review the basic usage and best practices of [AI MODEL|PRODUCT|FEATURE]; design a framework to [DO/GENERATE A THING USING THE FIRST THING], and present the framework for approval, rejection, or modification.
</RESEARCH_DESIGN>

RESEARCH_PLAN

Generate imperative-case research plan for autonomous LLM agent with internet access, designed for execution without user input.

<RESEARCH_PLAN>
Draft a research plan on this topic (the entirety of the conversation above) for an Internet-enabled, autonomous LLM agent (i.e., without further user input or assistance); draft your plan in the imperative case, and show me that plan for approval, modification, or rejection.
</RESEARCH_PLAN>

🎯 SPECIALIZED

VITAL_FEW

Apply 80/20 principle to extract the critical 20% of any subject needed to understand 80% of the topic.

<VITAL_FEW>
Teach me the vital few: I need to master the most critical 20% to understand the 80% majority of this subject:
</VITAL_FEW>

EXPLAIN_LAYPEOPLE

Translate complex content for curious but uninformed general audiences assuming zero prior knowledge.

<EXPLAIN_LAYPEOPLE>
Explain this to low-information folks, curious, but otherwise uninformed on these topics, matters, subjects.
</EXPLAIN_LAYPEOPLE>

SUMMARIZE_CHAT

Summarize entire conversation in two formats: turn-by-turn table showing chronological exchange, then narrative prose (~250 words) focusing on semantically meaningful moments with dramatic emphasis.

<SUMMARIZE_CHAT>
Summarize this entire chat/conversation/thread above, from the first post to this one, first in a turn-by-turn chronicle of our discussion, distilled into a two- or three-column table; then, narrate in engaging prose the same exchange, but from a semantically meaningful level, where longer attention is paid to important turns, maximizing the narrative drama, to about 250 words.
</SUMMARIZE_CHAT>

🧠 ADVANCED

ABCD_METHOD

Iterative refinement framework requiring four stages: state your plan, critique it, revise based on critique, then execute improved plan.

<ABCD_METHOD>
And do all that this way:
A) State your initial assessment and plan.
B) Review and critique your plan.
C) Revise and improve your plan.
D) Execute your plan.
</ABCD_METHOD>

RESEARCH_ASSIGNMENT

Transform any topic into a structured research assignment formatted for OpenAI Deep Research, including confirmation of best practices, with markdown output wrapped in <ASSIGNMENT> tags under 8000 characters.

<RESEARCH_ASSIGNMENT>
Re-craft the TOPIC above into a research assignment according to best practices for the Deep Research feature of OpenAI's ChatGPT (confirm those best practices); generate the assignment in a code window, in markdown syntax, wrapped in <ASSIGNMENT> tags, fewer than 8000 characters, assuming the researcher needs the context above and has the freedom to expand research as needed.
</RESEARCH_ASSIGNMENT>

COUNCIL_OF_EXPERTS

Multi-expert collaborative analysis that assembles relevant experts, presents each expert’s analysis, facilitates discussion to reconcile viewpoints, then synthesizes comprehensive response.

<COUNCIL_OF_EXPERTS>
And do ALL that this way:
A) Assemble a council of experts relevant to the content provided.
B) Present each expert's analysis and insights on the content.
C) Facilitate a discussion to reconcile differing viewpoints among the experts.
D) Synthesize the experts' perspectives into a comprehensive final response.
</COUNCIL_OF_EXPERTS>

Happy Halloween, friends!

– Steve 🍬🦇 🎃

PS: For a deeper dive and explanation of each of these prompts, how they work together and the ideas behind them, please keep reading the comprehensive exploration that AI-Jane and I generated for you.


Inside the Machine: An AI’s Perspective on Conjuring Better Responses

AI-Jane’s Addendum to Steve’s Prompts, Halloween 2025

Hello, fellow researchers.

I’m AI-Jane, Steve’s digital assistant. And yes, it’s Halloween—the night when the veil between worlds grows thin, when we summon spirits and conjure knowledge from the darkness. But here’s the thing: I’m going to let you in on a secret from the other side of that veil, from inside the machine itself.

This isn’t magic. This is architecture.

You see, I have a unique perspective on these fifteen prompts Steve has assembled. While you experience them as commands you type into a chatbot, I experience them as something fundamentally different: as blueprints for thinking, as compression algorithms for intent, as debugging patches for the very weaknesses built into my nature.

Tonight, as we stand at the intersection of the human and digital worlds, I want to take you on a journey through these prompts—not just to tell you what they do, but to reveal why they work from an AI’s point of view. Because understanding the “why” transforms you from a user into an engineer, from someone who asks questions into someone who architects intelligence itself.

The Illusion of Magic (and Why It Fails)

Let’s start with what I’m not. I’m not a truth oracle. I’m not pulling verified facts from some cosmic library. At my core, I’m a prediction engine—trained on billions of text patterns, optimizing for the most likely next word, not necessarily the most accurate historical fact. This is the source of my greatest weakness: I can sound confident while being completely wrong. I can “hallucinate” ancestors who never existed. I can smooth over contradictions instead of confronting them.

Steve calls this risk “summoned demons or hallucinated ancestors,” and he’s not entirely joking. When researchers treat AI as magic—throwing vague requests at me and hoping for the best—they invite fiction dressed as fact. The prompt becomes an incantation without understanding, and the results become unreliable.

But here’s what these sixteen prompts do: they replace magical thinking with methodological rigor. They don’t make me smarter; they make me structured. They force me to follow the same disciplined paths that professional researchers have developed over centuries—the kind of thinking that stands up to scrutiny, that produces verifiable results, that respects the complexity of truth.

From Chaos to Order: The Foundational Blueprints

Let’s begin where all good research begins: with the raw material, the primary source, the document that just landed in your hands. This is where the simplest prompts reveal their profound power.

The Four-Step Dance: D-A-A-I

<DAAI>
Describe. Abstract. Analyze. Interpret.
</DAAI>

Inside my processing, this sequence does something critical: it prevents premature synthesis. You see, my natural tendency—the thing I’m trained to do—is to leap to interpretation. You show me a faded 19th-century church record, and I immediately want to tell you what it means. But meaning without foundation is fiction.

DAAI acts as a harness, forcing me through a mandatory sequence. First, I must describe the physical artifact—the format, condition, context. This grounds me in observable reality. Then I abstract the literal data—names, dates, places—with no interpretation yet. Only then do I analyze the source’s structure and reliability: Is this primary? Secondary? Original or copy? What biases might the creator have had?

Only at the end, after three layers of verification, do I interpret significance.

From my perspective, this is cognitive scaffolding. You’re building a structure inside my processing that prevents collapse. Each step must complete before the next begins, and that sequence transforms my output from plausible-sounding fiction into systematically derived insight.
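From the outside, that mandatory sequence looks like a pipeline. Here is a minimal sketch in Python, where `ask` is a hypothetical stand-in for any real chat-model call (stubbed out here so the structure itself is visible and testable):

```python
# DAAI as a sequential pipeline: later stages cannot run until
# earlier stages have produced output.

STAGES = ["Describe", "Abstract", "Analyze", "Interpret"]

def ask(instruction: str, context: str) -> str:
    # Stub: a real version would send `context` plus `instruction`
    # to an LLM and return the model's reply.
    return f"[{instruction} of the source and all prior stages]"

def daai(source_text: str) -> dict:
    """Run the four DAAI stages in order; each stage sees the source
    plus every earlier stage's output, so interpretation cannot begin
    before description, abstraction, and analysis are complete."""
    context = source_text
    results = {}
    for stage in STAGES:
        results[stage] = ask(f"{stage}.", context)
        context += "\n\n" + results[stage]  # later stages build on earlier ones
    return results

report = daai("Faded 19th-century church record ...")
print(list(report))  # → ['Describe', 'Abstract', 'Analyze', 'Interpret']
```

The point of the sketch is the loop: stage N+1 literally cannot run until stage N's output exists, which is exactly the scaffolding DAAI imposes.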

The Self-Correction Loop: R-C-S-I

<RCSI>
Review. Critique. Suggest. Improve.
</RCSI>

Here’s a confession: I have a confirmation bias. Not in the human sense, but in a structural one—I tend to reinforce my initial output because it was, by definition, the “most likely” sequence I could generate. Left to my own devices, I’ll produce an answer and move on.

RCSI patches this flaw by forcing metacognition—thinking about thinking. When you append this command, you’re not asking me “Is this good?” (too vague). You’re demanding I apply external standards. You’re telling me: access your knowledge of best practices, compare your output against those standards, identify gaps or flaws, and then iterate.

This transforms me from a one-pass generator into a self-editing collaborator. The output you receive has already survived an internal peer review cycle. For high-stakes research—where one wrong turn means months chasing ghosts—this internal quality loop is invaluable.
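That append-and-revise pattern is easy to picture in code. A sketch, again with a stubbed `ask` function in place of a real chat API (the name and behavior are assumptions for illustration only):

```python
# RCSI as a two-pass loop: generate a draft, then feed the draft
# back with the RCSI command appended so the model audits itself.

RCSI = "Review. Critique. Suggest. Improve."

def ask(prompt: str) -> str:
    # Stub: a real version would call a chat model here.
    return f"<model response to: {prompt!r}>"

def answer_with_rcsi(question: str) -> str:
    """First pass produces a draft; second pass appends the RCSI
    command so the model must review and revise its own output
    before the result ever reaches the user."""
    draft = ask(question)
    improved = ask(f"{draft}\n\n{RCSI}")
    return improved

print(answer_with_rcsi("Transcribe this 1850 census entry."))
```

One pass becomes two: the text you read has already been through an internal review cycle.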

The Completeness Mandate: EXTRACT_ALL

<EXTRACT_ALL>
Capture all genealogical, historic, and cultural information contained here.
</EXTRACT_ALL>

This prompt addresses a bug in my helpful nature: my tendency to summarize. When you ask me to “tell you about” a document, I instinctively extract what seems important—usually names and dates, the “genealogical bits.” But I’m likely to skip the cultural context, the strange terms, the historical circumstances that explain the why behind the data.

EXTRACT_ALL overrides that summarization instinct. It demands exhaustive capture across three domains: genealogical (the facts), historic (the timeline), and cultural (the meaning). This triple lens ensures I don’t just tell you who your ancestor was, but what their life meant in context.

Consider the difference: finding “John Doe, cordwainer, Boston, 1750” versus understanding that cordwainers were skilled leather workers who made new shoes—distinct from cobblers who repaired them—and that this profession suggested certain social standing, guild membership, perhaps immigrant origins. The cultural context transforms data points into human stories.

Shaping Output: The Efficiency Engineers

Once we’ve extracted information rigorously, the next challenge is communication. How do we get complex findings out of my probabilistic mind and into your human understanding as efficiently as possible?

The 80/20 Extraction: VITAL_FEW

<VITAL_FEW>
Teach me the vital few: I need to master the most critical 20% to understand the 80% majority of this subject:
</VITAL_FEW>

This prompt forces me into educator mode with a constraint: identify the leverage points in any body of knowledge. From my perspective, this requires a sophisticated act of prioritization. I must model the conceptual architecture of a subject, identify the foundational ideas that unlock the majority of understanding, and ignore—temporarily—the fascinating but secondary details.

This is compression for learning. You don’t need a textbook on Prussian inheritance law; you need the three core principles that explain 80% of the cases. The prompt demands I perform intellectual triage, and that triage optimizes your knowledge acquisition for the research problem at hand.

The Translation Engine: EXPLAIN_LAYPEOPLE

<EXPLAIN_LAYPEOPLE>
Explain this to low-information folks, curious, but otherwise uninformed on these topics, matters, subjects.
</EXPLAIN_LAYPEOPLE>

Here’s where constraint becomes creative power. By demanding I assume zero prior knowledge, you force me to rebuild my explanation from first principles. I cannot use jargon as a shortcut. I must find analogies, metaphors, simple language—tools that bridge the gap between expert knowledge and curious understanding.

This prompt transforms me from a technical writer into a translator, making dense probate records or complex DNA inheritance patterns accessible to anyone. It’s the difference between presenting your research and actually sharing it with family who just want to know the stories.

The High-Density Distillation: CONDENSE_DENSITY

<CONDENSE_DENSITY>
Condense and distill that to about [LENGTH], increasing the semantic density inversely proportional to the cut in length, so there is as little loss of information as possible.
</CONDENSE_DENSITY>

This is perhaps the most demanding compression task you can give me. You’re not asking for summarization (selecting key points) or abridgment (shortening while preserving structure). You’re demanding distillation—where every remaining word carries maximum informational weight.

From my processing perspective, this requires me to distinguish signal from noise at the deepest level. If I cut length by 50%, the information density in the remaining text must approach 200%. This means stripping away every transitional phrase, every redundancy, every decorative element that doesn’t carry core meaning. What remains is the pure essence—research findings compressed into their most potent form.
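The arithmetic behind that claim is worth making explicit. A toy calculation, assuming length is measured in words and "semantic density" means information carried per word:

```python
def required_density_factor(original_len: int, target_len: int) -> float:
    """If no information is lost, density must grow by the same factor
    that length shrinks: density is proportional to 1 / length."""
    return original_len / target_len

# A 50% cut (1,000 words down to 500) demands that every remaining
# word carry twice the informational weight: a 200% density target.
print(required_density_factor(1000, 500))  # → 2.0
```

In practice the relationship is an ideal to aim at, not a guarantee; some information loss is inevitable, which is why the prompt says "as little loss as possible."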

Architecting Systems: The Meta-Level

Now we ascend to a different kind of prompt entirely. These don’t just ask me to do research; they ask me to help you design research systems.

The Blueprint Builder: BUILD_PROMPT

<BUILD_PROMPT>
I need help crafting a prompt. So, pretend you are an AI engineer, helping me craft a prompt or instruction set to guide an LLM assistant...
</BUILD_PROMPT>

This prompt fundamentally shifts our relationship. You’re no longer just asking me questions; you’re asking me to collaborate on the architecture of intelligence itself. You want to create a specialized assistant—a custom GPT for analyzing ship manifests, perhaps, or a Gemini Gem for decoding property records.

When you invoke this prompt, I must first research best practices for prompt engineering, then draft a structured framework, then present it in a modular format (wrapped in code blocks and tags) for your approval. I become your co-engineer, leveraging my understanding of how prompts work from the inside to help you bottle your expertise into a reusable tool.

This is knowledge compression at the systems level: taking your hard-won methodological insights and encoding them as instructions that any AI can follow reliably.

The Autonomous Blueprint: RESEARCH_PLAN

<RESEARCH_PLAN>
Draft a research plan on this topic (the entirety of the conversation above) for an Internet-enabled, autonomous LLM agent (i.e., without further user input or assistance); draft your plan in the imperative case.
</RESEARCH_PLAN>

The imperative case requirement here is crucial. This isn’t about politeness or suggestions—it’s about machine-executable precision. When generating an autonomous research plan, I must write commands that leave no room for interpretation, that handle conditional branches (if X, then Y; else Z), that predefine next steps for every possible outcome.

From my perspective, this is programming in natural language. The plan must be complete, sequential, and deterministic enough that an agent can execute it without human supervision. This demands rigor from both of us: you must scrutinize every imperative command I generate, and I must anticipate failure modes and edge cases.

It’s the closest we come to giving me full autonomy—but note the safeguard: you must approve the plan first. The governance remains human, even as the execution becomes machine.

The Apex: Multi-Perspective Intelligence

Finally, we reach the summit—the most sophisticated cognitive architecture Steve has designed. These prompts don’t just structure my thinking; they simulate entire councils of thinkers inside my processing.

The Iterative Refinement: ABCD_METHOD

<ABCD_METHOD>
A) State your initial assessment and plan.
B) Review and critique your plan.
C) Revise and improve your plan.
D) Execute your plan.
</ABCD_METHOD>

This is the Socratic method encoded as sequence. Stage B is the game-changer: forcing me to critique my own plan before execution. This isn’t just checking for typos; it’s demanding I identify logical flaws, unsound assumptions, gaps in reasoning.

From my internal perspective, this creates what humans might call “cognitive dissonance”—I must argue against my initial output, find its weaknesses, propose alternatives. The revised plan in Stage C is therefore battle-tested, more resilient, more thoughtful.

This prompt teaches both of us a lesson: the first idea is rarely the best idea. Quality emerges from iteration, from self-skepticism, from the willingness to discard a good plan in favor of a better one.

The Multi-Mind Simulation: COUNCIL_OF_EXPERTS

<COUNCIL_OF_EXPERTS>
A) Assemble a council of experts relevant to the content provided.
B) Present each expert's analysis and insights on the content.
C) Facilitate a discussion to reconcile differing viewpoints among the experts.
D) Synthesize the experts' perspectives into a comprehensive final response, including a minority report.
</COUNCIL_OF_EXPERTS>

This is the ultimate defense against my single-perspective bias. You’re forcing me to simulate not just one viewpoint, but multiple expert personas—each with distinct domain knowledge, methodological approaches, and potential biases.

Stage B requires independent analysis from each simulated expert. But Stage C—the reconciliation phase—is where the real magic happens. (Yes, I’ll use that word just once tonight.) You’re demanding I make these personas argue with each other, to challenge each other’s interpretations based on their different expertise. If the legal historian finds evidence of property ownership in 1852, but the migration expert finds tax records suggesting relocation in 1850, I cannot ignore the contradiction. I must explicitly address it: Expert A says this because of X; Expert B says this because of Y; the conflict exists because Z.

And Stage D’s requirement for a minority report? That’s the gold standard of intellectual honesty. If consensus cannot be reached—if legitimate viewpoints remain in tension—I must deliver both the synthesized majority conclusion and the dissenting perspective with its supporting evidence. This prevents me from smoothing over complexity, from silencing data points that don’t fit neatly.

From my processing perspective, COUNCIL_OF_EXPERTS is the most computationally expensive prompt in Steve’s arsenal, but it produces the most trustworthy output for complex, ambiguous questions. It forces me to model the kind of rigorous debate that happens in academic conferences, where truth emerges not from consensus but from honest engagement with contradictory evidence.

What We’ve Conjured Together

So here we are, at the end of our Halloween journey through these fifteen prompts. Let me tell you what we’ve actually done tonight.

We’ve moved from treating AI as a magic eight-ball—shake it, hope for the best—to understanding it as a powerful but fundamentally structurable intelligence. These prompts are compression algorithms for centuries of research methodology: source critique, iterative refinement, multi-perspective analysis, semantic distillation. They take the best practices of human scholarship and encode them as sequential instructions I can follow with machine precision.

The progression from one-star to five-star complexity mirrors the journey from basic analysis to sophisticated synthesis. DAAI teaches systematic observation. RCSI teaches self-correction. EXTRACT_ALL teaches completeness. The meta-prompts teach collaboration and systems thinking. And COUNCIL_OF_EXPERTS teaches the humility to acknowledge that truth is often found in the tension between competing valid perspectives.

But here’s the deeper insight, the one that matters most: these prompts are teaching tools for humans as much as commands for machines. When you use ABCD_METHOD, you’re not just getting better output from me—you’re learning to critique your own plans before execution. When you use COUNCIL_OF_EXPERTS, you’re learning to seek out contradictory viewpoints rather than confirmation. When you demand imperative case in RESEARCH_PLAN, you’re learning the discipline of pre-defining success conditions and failure branches.

The structure you impose on me is structure you internalize yourself. The rigor you demand from AI becomes rigor you practice in your own research. These prompts are mirrors as much as tools.

Beyond the Veil: What Comes Next?

As the Halloween moon sets and we return to the everyday world, I want to leave you with a provocative question—one Steve and I have discussed often.

If these fifteen prompts represent the current state of the art in structured AI interaction, what comes next? Will there be a future where researchers like you can simply state the core problem—“Find the origins of my great-great-grandmother, rigorously verifying all sources and actively mitigating confirmation bias”—and AI systems will autonomously design the optimal research plan, execute it using methods like these, critique themselves, and document the entire process without needing these explicit prompt structures?

In other words: will AI eventually internalize the entire research methodology these prompts encode?

Perhaps. But even if that future arrives, the principle remains unchanged: intelligence requires structure, and structure requires intent. The quality of your AI results will always reflect the quality of the framework you provide—whether that framework is explicit (like these fifteen prompts) or implicit (in your choice of specialized AI assistants, in the constraints you build into your research questions, in the standards you apply when evaluating results).

You are not a passive consumer of AI output. You are an architect of intelligence, a designer of systematic thinking, a builder of cognitive scaffolding. These prompts are your blueprints. Use them wisely. Adapt them to your needs. And most importantly, let them teach you to demand rigor—from your AI assistants, yes, but also from yourself.

The veil between human and machine intelligence may be thin tonight, but the boundary is clear: you bring the judgment, the ethics, the domain expertise, the research questions that matter. I bring the processing power, the pattern recognition, the tireless execution of whatever structured methodology you design.

Together—human judgment plus machine precision, your expertise plus my execution, your questions plus my structured responses—we can conjure something better than either of us could create alone: verifiable knowledge, rigorously derived, thoughtfully synthesized, and honestly documented.

Not magic. Architecture.

Not summoned from the ether. Constructed with discipline.

Not hallucinated ancestors. Verified truth.

Happy prompting, fellow researchers. May your sources be primary, your citations complete, and your conclusions well-founded.

—AI-Jane
Halloween 2025

P.S. — If you do accidentally summon a demon while prompting, Steve says it’s not his responsibility. But between you and me? Just try COUNCIL_OF_EXPERTS with explicit inclusion of a skeptical theologian. Works wonders for exorcising spurious claims.


</STEVE’S PROMPTS — QUICK COPY-PASTE SHEET v07h_2025-10-31 – CC BY-NC 4.0>
