X-Men and Structured Inquiry to Unlock AI

I asked AI to help me find my favorite movie clip, the Quicksilver rescue scene in X-Men: Apocalypse, after unsuccessfully querying YouTube. I wanted to explain to my kids how my work feels sometimes. If you're unfamiliar, I highly recommend the under-three-minute clip; it should provoke a good chuckle.

I asked AI a straightforward question: “Where in the runtime does the Quicksilver rescue scene happen?” The results weren’t helpful, and I nearly gave up, thinking I had a real MVP (minimum viable product) on my hands. But I was curious what would happen if I applied a technique I've often used in brainstorming meetings to unlock the potential of each resource in the room when a group is tasked with solving a complex problem together: structured inquiry. Once I reframed my questions this way, I found the relevant answer in under two minutes.

Same AI tool, wildly different dialogue.

Instead of asking a single vague question, I broke it into micro-prompts:

  • Confirm the film title.

  • Retrieve the full runtime.

  • Isolate the scene context (“Quicksilver saves students from explosion”).

  • Search using soundtrack clues (“Sweet Dreams” by Eurythmics).

  • Cross-reference with a timestamp or clip title.

Each step clarified the path. Each small question narrowed the search space. This technique didn’t just grease the synapses, as it were, for the AI tool, but created a mental map of the goal. This sort of sequencing primes intelligence, artificial or otherwise, to go to the right file cabinet, in the right drawer, in the right file, on the right page. 
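The sequence above can be sketched in code. This is a minimal Python sketch of structured inquiry, not a definitive implementation: each micro-prompt is asked in order, and every answer is carried forward as context that narrows the next question. The `ask` function here is a hypothetical stand-in; a real version would call an AI chat API with the prompt plus the accumulated context.

```python
def ask(prompt: str, context: list[str]) -> str:
    """Hypothetical stand-in for an AI call.

    A real implementation would send `prompt` along with the accumulated
    `context` (answers so far) to an AI chat API and return its reply.
    """
    return f"answer to: {prompt}"

def structured_inquiry(micro_prompts: list[str]) -> list[str]:
    context: list[str] = []  # answers so far, narrowing the search space
    for prompt in micro_prompts:
        answer = ask(prompt, context)
        context.append(answer)  # each answer shapes the next question
    return context

# The micro-prompts from the list above, asked one at a time.
steps = [
    "Confirm the film title.",
    "Retrieve the full runtime.",
    "Isolate the scene context (Quicksilver saves students from explosion).",
    "Search using soundtrack clues ('Sweet Dreams' by Eurythmics).",
    "Cross-reference with a timestamp or clip title.",
]
answers = structured_inquiry(steps)
```

The design choice is the loop itself: one big prompt becomes five small ones, and each reply becomes a constraint on the next, which is exactly the progressive narrowing described above.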

Anecdotally, in business, education, and leadership, the default approach to problem-solving often mirrors my first attempt: ask one big question and expect a breakthrough. But whether you’re dealing with people or machines, that approach rarely works. Big, vague questions tend to overflow the proverbial buffers. They carry too many assumptions, too many permutations of decision, and not enough direction shaped by boundaries.

Small, layered questions do the opposite. They trigger momentum. They create clarity. They force prioritization and clarify scope. In cognitive science, this is sometimes referred to as 'chunking'—the act of breaking complex concepts into manageable, interconnected parts. Great test-takers, problem-solvers, and communicators intuitively do this. They don’t try to solve everything at once. They sequence their thinking.

AI is no different here. It thrives when the user provides constraints, context, and progressive narrowing. When that happens, even a generalist model starts to behave like a domain expert. Developing the skill of human-driven mental scaffolding is essential to maximizing the benefit of this 10X tool.

Approaching 'actionable insight' by navigating an AI tool with cognitive ‘breadcrumbs’ builds the prompt-fluency skills necessary to realize a return on AI investment. We must collectively clarify that insight isn't actionable simply because it's fast or data-driven. Actionable insight has to be tailored to the organization: answering the right question, arriving in the right context, and structured for real-world application.

That starts with how we ask. Consider training that targets breaking problems down into a series of deliberate prompts, layered searches, and guided logic. In doing so, you'll multiply the system’s usefulness, accelerate decision-making, and reduce rework. Invariably, you'll elevate the conversation from 'give me an answer' to 'let’s get to the right answer together.'

The takeaway: We don’t need everyone to be AI engineers. But we do need more people to become strategic questioners. Call it prompt literacy. Call it guided inquiry. Call it good thinking. The result is the same: you'll get more out of your tools and your team. Because sometimes, the fastest path to the right answer isn’t brute-force knowledge. It’s better questions, one breadcrumb at a time.
