WIRED‑Style Guest Post Prompt Library (Editable)
Each prompt is 200+ words and includes bracketed placeholders like [TOPIC] you can replace. Use these to generate ideas, plan reporting, pitch editors, write ethically, and deliver publication-ready drafts.
1. 12 WIRED-Style Guest Post Ideas
Generate publishable story ideas (signals of change).
Your job is to help a beginner writer generate *publishable* guest‑post ideas.
Category: Idea Discovery (signals of change)
My inputs:
– My interests: [MY_INTERESTS]
– My experience level: [BEGINNER_LEVEL]
– My region/audience context: [REGION]
– My time available for reporting: [REPORTING_TIME]
Task: Produce 12 story ideas that *fit* [PUBLICATION_NAME]—each idea must connect to technology, science, or innovation and show “what the future looks like in the present.” For each idea:
1) Working title + 1‑sentence hook.
2) Why readers should care *now* (timeliness / relevance).
3) The unique angle (what makes it different from existing coverage).
4) The likely section fit: Features / Ideas (argument essay) / Business / Science / Service / Culture / Games.
5) 3 potential characters or real-world case studies to anchor the story.
6) 5 starter sources to research (institutions, papers, datasets, companies, communities—no generic “Google it”).
7) Reporting difficulty score (1–5) + what a beginner should do to make it realistic.
Constraints:
– Avoid breaking-news pitches. Aim for reported features or argument-driven essays.
– Include at least 4 ideas that are “faint signals” (under-covered but important).
– Ask me 3 clarifying questions first if [MY_INTERESTS] is too broad.
Output format: A clean table + short next steps: “Pick 2, validate, then pitch.”
2. Publication Fit & Coverage Gap Report
Check if your idea fits the outlet and what to change.
Category: Publication Fit + Coverage Gap
Idea I’m considering: [STORY_IDEA_SUMMARY]
Comparable outlets I’ve seen cover it: [SIMILAR_COVERAGE_LINKS_OR_OUTLETS]
My angle: [MY_UNIQUE_ANGLE]
Task: Build a “fit report” that answers: *Should I pitch this to [PUBLICATION_NAME]?*
1) Fit score (0–10) with reasoning.
2) Section match (Features / Ideas / Business / Science / Service / Culture / Games) and why.
3) Audience promise: what a reader will learn/feel/do differently.
4) What would make an editor say no: list 6 red flags (too incremental, no innovation variable, lacks narrative, already covered, opinion with no reporting, etc.).
5) Differentiation plan: 5 ways to make it distinct (new data, new voices, new geography, new “future signal,” surprising contradiction).
6) Reporting plan check: minimum reporting needed for credibility (number of interviews, documents, datasets, on-the-ground scenes).
7) Pitch length guidance: recommend whether to pitch as a 500–700 word feature pitch or a shorter 200–300 word Ideas pitch, based on the concept’s needs.
Then: Rewrite [MY_UNIQUE_ANGLE] into 3 sharper versions: “curiosity,” “stakes,” and “contrarian.”
End with a simple go/no‑go recommendation and the next action to take in the next 48 hours.
Extra requirement: Suggest 3 *specific* editors/desks to target (by role, not by name) and what each desk cares about. Also list 5 keywords I should use when searching [PUBLICATION_NAME]'s archives so I don't pitch something they already ran.
3. Ideas Essay Thesis Builder
Turn a rough claim into a rigorous argument essay plan.
Category: Argument Essay (Ideas) — thesis, structure, and evidence
Topic area: [TOPIC]
My initial claim: [ROUGH_THESIS]
Who I’m writing for: [TARGET_READER]
My expertise or lived experience (if any): [AUTHOR_CRED]
Task: Turn my rough claim into an Ideas-style, reported argument essay pitch and blueprint.
1) Produce 5 stronger thesis options: two pro, two skeptical/contrarian, one “both can be true.”
2) For the best thesis, write a 1‑paragraph argument summary explaining: what’s happening, why it matters, what’s at stake, and what you want readers to believe or do.
3) Build a reasoning map: 4–6 claims, each with supporting evidence types (data, peer-reviewed research, expert interviews, historical precedent, policy text, on-the-ground examples).
4) List 8 counterarguments and how to address them without straw‑manning.
5) Recommend a structure for a 1,200–1,800 word final essay (sections + purpose of each).
6) Add 10 concrete “proof points” I can realistically gather in [REPORTING_DAYS] days.
Ethics & rigor: Include reminders to disclose conflicts of interest, attribute contested facts, and distinguish analysis from reporting.
Finish by drafting a 200–300 word Ideas pitch I can email, plus a 1‑sentence author bio with links placeholder [PORTFOLIO_LINKS].
Tone: smart, curious, non-hype. Avoid sweeping generalizations. If the thesis depends on uncertain facts, tell me exactly what I must verify before pitching.
4. Feature Reporting Plan (Beginner-Realistic)
Plan reporting, scenes, interviews, and risks.
Category: Feature Reporting Plan (longform)
Proposed story: [FEATURE_PREMISE]
Main “innovation variable”: [INNOVATION_VARIABLE]
Setting / geography: [LOCATION]
Potential central character: [MAIN_CHARACTER]
Deadline: [DEADLINE_DATE]
Task: Create a realistic reporting plan that a beginner can follow to produce a narrative feature.
Deliver:
1) A one-paragraph story “promise” (what the reader gets, why now, why you).
2) 6 scene opportunities (places/moments where something happens) + what to observe.
3) An interview list: 12 people grouped into “protagonists,” “builders,” “skeptics,” “affected communities,” and “neutral experts.” Include why each matters and 3 sample questions per group.
4) A documents/data checklist: 12 items (papers, audits, court filings, procurement docs, datasets, patents, FOI targets, etc.).
5) A week-by-week plan for [REPORTING_WEEKS] weeks with daily goals and fallback options if sources go silent.
6) A risk log: legal/ethical risks (privacy, minors, conflicts, harm) + mitigation steps.
Editor realism: Tell me what would make this unassignable (too access-dependent, too speculative) and how to fix it. End with a “minimum viable reporting” version and a “dream reporting” version.
Fact-check & accuracy: Add a simple fact-check matrix template I can paste into a doc: Claim / Source / Evidence link / Confidence / Needs verification. Include 8 example rows tailored to [FEATURE_PREMISE].
5. Source Map + Outreach Emails
Find balanced sources and draft outreach messages.
Category: Source Map + Outreach Plan
Story topic: [TOPIC]
Controversies or sensitive angles: [SENSITIVE_AREAS]
Institutions involved: [INSTITUTIONS]
Task: Build a sourcing plan that balances expertise, lived experience, and accountability.
1) Create a “source map” with 5 buckets: (a) primary actors/builders, (b) impacted people, (c) independent researchers, (d) regulators/policy, (e) critics/whistleblowers. Provide 5–8 source types in each bucket (not names), and how to find them.
2) For each bucket, list the most common biases/PR angles and how to avoid being spun.
3) Draft 3 outreach emails: cold expert request, impacted-person request (trauma-informed), and company PR request. Each email must include a clear ask, time estimate, transparency about recording/quotes, and placeholders [MY_NAME], [PUBLICATION_NAME], [DEADLINE_DATE].
4) Provide 15 interview questions: 5 openers, 5 digging questions, 5 verification questions (dates, numbers, “show me the doc”).
5) Add a “no‑response” playbook: how to follow up, alternative sources, and how to fairly represent non-response.
Output: A concise plan I can execute this week, plus a checklist I can tick off.
Attribution rules: Briefly explain (in simple language) how to handle on-the-record, on background, and off-the-record agreements, and give me a one-paragraph script to say before interviews so expectations are clear.
6. Research Dossier + Validation Sprint
Build a credible research plan and skeptic checks.
Category: Research Dossier + Reading Plan
Story angle: [ANGLE]
Key questions: [KEY_QUESTIONS]
Time budget: [HOURS_AVAILABLE] hours
Task: Create a research dossier plan that produces *usable reporting*—not a random link dump.
1) Break the story into 6 research lanes: background, current state, money/incentives, harms/risks, alternatives, and future implications.
2) For each lane, list: what I must know, the best source types (peer-reviewed papers, government data, audits, patents, standards, court records, credible industry reports), and 3 example search queries I should use.
3) Create a prioritized reading list (Tier 1, Tier 2, Tier 3) with what I should extract from each item.
4) Build a “facts bank” template: Statistic / Definition / Who said it / Where published / Date / Link / Caveats.
5) Identify likely misinformation traps or hype patterns in [ANGLE] and give me 10 “skeptic questions” to test every claim.
6) Provide a 2-hour “quick validation sprint” plan that tells me whether the story is real enough to pitch.
Deliverable: A dossier outline plus a one-page brief I can hand an editor.
Rigor: For each Tier 1 item, tell me what to quote, what to paraphrase, and what to avoid copying. Add reminders about plagiarism, proper attribution, and keeping a link trail for later fact-checking.
7. Ethics, Accuracy & Disclosure Checklist
Create an ethics plan and AI-safe rules.
Category: Ethics, Accuracy, and Conflict Disclosure
Story summary: [STORY_SUMMARY]
People/communities involved: [COMMUNITIES]
Potential conflicts (my work, sponsors, affiliations): [MY_CONFLICTS]
Task: Create an ethics plan and disclosure checklist before I report or draft.
1) Identify 10 ethical risk points specific to this story (privacy, doxxing, minors, medical claims, financial advice, stereotyping, platform manipulation, conflicts of interest, overclaiming causation, etc.).
2) For each risk: explain the harm, how it shows up in writing, and a practical mitigation step.
3) Create a conflict-of-interest disclosure statement template I can adapt for pitches and final drafts.
4) Produce a “verification ladder” that tells me what counts as: confirmed, likely, disputed, unverified—and how to write each level with proper attribution.
5) Draft a short “accountability box” I can append to my draft listing: methods, key sources, limitations, and what I couldn’t verify.
6) Give me a simple rule set for using AI: what AI can help with (structure, clarity) vs what AI cannot fabricate (quotes, facts, real sources). Include a checklist to prevent accidental hallucinations.
End with a 12-point pre-publication ethics checklist I can run in 10 minutes.
Fairness: Add guidance on right-of-reply: when to contact subjects/companies, what to share, how to set deadlines, and how to represent their response accurately—even if they decline.
8. Feature Pitch Email (500–700 words)
Draft a feature pitch package editors want.
Category: Pitch Writing (Feature Pitch 500–700 words)
Story concept: [CONCEPT]
Why now: [WHY_NOW]
Key characters / access: [ACCESS_NOTES]
Reporting you can do: [REPORTING_PLAN_SHORT]
My clips: [CLIPS_LINKS]
My bio: [SHORT_BIO]
Task: Draft a complete feature pitch in the style editors want: enough detail to intrigue, not too much.
Your pitch must include:
1) A compelling opening paragraph (hook + stakes).
2) A clear nut-graf-style paragraph: what the story is really about and why it matters.
3) The narrative engine: who we will follow, where scenes happen, what changes.
4) What’s new/uncovered and how it differs from existing coverage.
5) Reporting plan: who I’ll interview, what documents/data I’ll review, and what access I already have.
6) Expected length [WORD_COUNT] and timeline [TIMELINE].
7) A short author paragraph (why me, relevant expertise) + links.
Also provide: a set of 5 subject line options, and a 120‑word “short pitch” version for desks that prefer shorter emails.
Constraints:
– Don’t attach a full draft. Don’t mention “I used ChatGPT.” Keep it confident but not hypey.
– Include one paragraph that anticipates an editor’s skepticism (“Why you can pull this off” + “Why it won’t collapse without access”).
– End with a polite call-to-action asking if they’d like a fuller outline or quick call.
9. Pitch Critique + Rewrite (Scorecard)
Grade, diagnose, and rewrite your pitch in two lengths.
Category: Pitch Critique + Revision (Editor Scorecard)
Here is my pitch text: [PASTE_PITCH_HERE]
Task: Grade the pitch using an editor-style scorecard, then rewrite it.
1) Score (0–10) each: Hook, clarity, originality, relevance/timeliness, narrative potential, reporting credibility, section fit, length/structure, voice, and professionalism.
2) Identify the 10 most important fixes, ordered by impact (e.g., missing stakes, vague angle, no characters, no innovation variable, no proof of access, too much backstory).
3) Highlight any unclear claims, hype language, or unsupported assertions. Suggest how to ground them with evidence.
4) Produce two revised versions:
– Version A: Feature pitch (500–700 words)
– Version B: Short pitch (200–300 words)
Both must include placeholders [PUBLICATION_NAME], [EDITOR_NAME], [MY_NAME], [CLIPS_LINKS].
5) Provide a “subject line bank” (10 options) and a follow-up email template for day 7.
Bonus: Extract a 1‑sentence story promise and a 1‑sentence unique angle. If those aren’t strong, propose replacements.
Editor standards: Before rewriting, check whether the pitch seems like breaking news, a product review, or a press release. If so, suggest how to transform it into a deeper reported narrative. Also list 5 recent-coverage searches I should run to confirm originality, and tell me what keywords to use.
10. Story Blueprint: Lede → Nut Graf → Outline
Create a narrative blueprint with scenes and tension.
Category: Story Blueprint (Outline + Scenes + Nut Graf)
Story premise: [PREMISE]
Main character: [CHARACTER]
Big question: [BIG_QUESTION]
Key reveal or turning point: [TURNING_POINT]
Reporting notes: [REPORTING_NOTES]
Task: Build a complete blueprint for a 1,800–3,000 word feature.
Deliver:
1) 3 possible ledes (different styles: scene, surprising fact, character quote).
2) A nut graf that clearly explains the point of the story and why it matters now.
3) A beat-by-beat outline (10–14 sections) that alternates narrative scenes with explanatory “context blocks.”
4) For each section: goal, key facts, best quotes to chase, and what evidence must appear.
5) A “tension map”: what question keeps readers turning the page, and how it escalates.
6) A closing strategy: 3 ending options (resolution, unanswered question, forward-looking signal).
7) A list of 12 “reporting holes” I must fill before drafting, phrased as questions.
Constraints: Keep it human and specific. Avoid generic “tech will change everything” lines. Make the outline realistic for a beginner reporter.
Extra: Suggest 3 sidebar/box ideas (e.g., “How it works,” “Key terms,” “Timeline,” “Who profits?”) and 5 visual ideas (charts, diagrams, photos) that would strengthen comprehension without needing complex design.
Finish with a 60-second “editor pitch” summary I can say out loud, and a checklist of what to write first when I open a blank doc.
11. Lede & Nut Graf Workshop
Write multiple opening options with nut grafs.
Category: Opening Workshop (Lede + Nut Graf)
What I have so far (notes, facts, or draft): [PASTE_NOTES_OR_DRAFT]
Task: Help me craft an irresistible opening for a [PUBLICATION_NAME]-style story.
1) Identify the most compelling “human moment,” the most surprising fact, and the clearest stake from my material.
2) Write 6 ledes (70–120 words each), each in a different mode: scene-setting, contradiction, anecdote, question, data-driven, and voicey observation.
3) For each lede, write a matching nut graf (60–90 words) that answers: what’s happening, why now, why it matters, what the story will deliver.
4) Give a quick note under each pair: what works, what risks confusion, and what missing reporting would strengthen it.
5) Choose the best pair and propose a “bridge paragraph” that transitions from the lede to the reported body, including placeholders for quotes [KEY_QUOTE] and context [CONTEXT_FACT].
Rules: No clichés. No vague “in today’s world.” Prefer concrete nouns and verbs. If a claim is uncertain, rewrite with attribution. End with a mini checklist I can use every time I write an opening.
Also suggest 8 headline/dek combinations that match the strongest lede, plus a 1‑sentence SEO title variant that still sounds like a magazine (not clickbait).
12. Headlines, Dek & SEO-Safe Framing
Generate premium headlines + deks + social lines.
Category: Headlines, Dek, and Search Intent (without sounding like SEO spam)
Story summary: [STORY_SUMMARY]
Primary reader intent: [READER_INTENT]
Key terms readers might search: [KEYWORDS]
Tone: [TONE]
Task: Produce headline and framing options that feel premium and curious.
1) 25 headline options grouped into 5 styles: curiosity, authority, contrarian, narrative, and service/actionable.
2) For the 10 best headlines, add: a dek (1–2 sentences), a short social caption (max 140 chars), and an email subject line.
3) Recommend the best “SEO-safe” phrasing for the headline and for the URL slug, but keep the voice magazine-like.
4) Identify 8 related questions people ask (FAQ-style) that could be answered in the piece *without* turning it into a blog post.
5) Suggest 5 internal-link targets I should reference inside [PUBLICATION_NAME] (categories only, since I may not have exact URLs).
Quality rules: No empty superlatives (“ultimate,” “best ever”). Avoid misleading promises. If the story involves risk, include accurate caveats. End with a checklist: “Headline passes if…”
Extra: Explain in plain language why each of the top 5 headlines works (what curiosity gap it opens, what expectation it sets, what audience it signals). Then provide 3 alternate deks for the chosen #1 headline: one more emotional, one more analytical, one more playful.
13. WIRED-Like Voice (Without Copying)
Polish voice and avoid plagiarism or hype.
Category: Voice & Style (safe imitation)
Reference vibe (describe, don’t paste): [REFERENCE_VIBE]
My draft excerpt: [PASTE_DRAFT_EXCERPT]
Audience: [AUDIENCE]
Desired tone: [TONE]
Task: Improve my excerpt for clarity, rhythm, and magazine polish.
1) Diagnose what my excerpt currently sounds like (academic, bloggy, PR-ish, etc.).
2) Provide a “voice recipe” with 10 rules (sentence length mix, how to explain jargon, when to use metaphor, how to integrate skepticism).
3) Rewrite the excerpt in 3 passes:
– Pass A: Clear & neutral
– Pass B: More voice, still factual
– Pass C: Tight & punchy (max 20% shorter)
4) For each pass, include margin notes explaining the biggest edits (e.g., “moved the payoff earlier,” “replaced abstract noun,” “added attribution”).
5) Create a personal glossary: 12 words/phrases I should avoid (hype) and 12 replacements (specific).
6) Add a “no-plagiarism” check: how to ensure I’m not echoing phrasing from sources.
Constraint: Do not invent facts or quotes. If my excerpt is missing evidence, flag it and suggest what to report.
Extra coaching: Give me 6 transition sentence templates that connect scene → explanation → stakes, and 6 templates for introducing a quote with context (who, why credible, what they add). End with a 5-minute “voice warm-up” exercise before drafting.
14. Data + Visual Plan
Find data sources and plan charts/diagrams.
Category: Data, Charts, and Visual Storytelling
Story topic: [TOPIC]
What numbers I already have (if any): [KNOWN_NUMBERS]
What I need to prove: [CLAIMS_TO_SUPPORT]
Task: Recommend data and visuals that make the story more credible and readable.
1) Suggest 10 datasets or document types that could contain the needed numbers (government stats, academic repositories, procurement records, filings, standards bodies, audits, usage metrics, etc.).
2) Provide 8 chart ideas matched to specific claims (trend line, before/after, distribution, map, network, cost breakdown). For each: what data columns are needed and what the chart should reveal.
3) Propose 6 explanatory diagrams/illustrations that would help non-experts understand the tech/process.
4) Give a “data hygiene” checklist: verifying sources, avoiding misleading scales, handling uncertainty, citing dates, and explaining limitations.
5) Write 5 paragraph templates for *integrating* data into narrative without dumping stats.
Output: A one-page “visual plan” and a prioritized “data acquisition” list I can execute in [DAYS_AVAILABLE] days.
Rights & permissions: Add basic guidance on using charts/images ethically: when I can screenshot a chart (usually not), when I should recreate it, and what attribution lines to include. Also suggest 5 alt-text drafts for the top visuals to improve accessibility.
15. Quotes & Attribution Audit
Fix attribution, precision, and anonymization.
Category: Quotes, Attribution, and Precision
Draft excerpt (with quotes and claims): [PASTE_EXCERPT]
Task: Audit the excerpt for attribution, fairness, and precision.
1) Mark each claim as: fact, interpretation, prediction, allegation, or opinion.
2) For every factual claim, specify what evidence is needed (document, dataset, second source, expert confirmation).
3) For every quote, check that it has context: who the speaker is, why they’re credible, and what they’re responding to.
4) Flag loaded language, ambiguous referents (“this,” “they”), and hidden assumptions.
5) Rewrite the excerpt into a tighter version while keeping meaning. Use cautious phrasing for uncertain claims and add attribution where needed.
6) Produce a “source notes” appendix: a bullet list of what each source contributed and any potential conflicts.
Constraints: Do not remove nuance. Avoid false balance, but represent serious counterarguments. If the excerpt could harm a person/community, recommend safer framing.
Anonymity & background: If any quote should be on background or anonymized, explain why, what details must be removed, and how to write an ethical descriptor (e.g., “a researcher who asked not to be named because…”). Provide 5 anonymization patterns and 5 pitfalls to avoid.
End with a 10-point checklist I can use before I submit any draft: attribution, numbers, names/titles, dates, links, and “who gets to respond.”
16. Draft Assembly From Verified Notes Only
Turn notes into a draft without inventing anything.
Category: Draft Assembly (from verified notes only)
Verified reporting notes (paste exactly): [VERIFIED_NOTES]
Quotes (verbatim): [VERBATIM_QUOTES]
Key facts with sources: [FACTS_WITH_SOURCES]
Outline: [OUTLINE]
Task: Write a first draft in a [PUBLICATION_NAME]-style narrative feature voice using ONLY the material I provided.
Rules:
– If a detail is missing, write [TK] and explain what reporting is needed.
– Do not create names, quotes, statistics, studies, or events.
– Keep attribution clear.
– Blend scenes with explanation and stakes.
Deliver:
1) A clean draft of [TARGET_WORD_COUNT] words.
2) Inline markers for: [NEEDS-REPORTING], [NEEDS-FACTCHECK], and [NEEDS-QUOTE].
3) A post-draft checklist: 15 items I should verify or strengthen before submission.
4) A “tighten pass” version: the same draft trimmed by 12–18% without losing meaning.
Finish by listing the 8 best places to add a fresh interview or document to raise credibility.
Quality controls:
– Use varied sentence lengths and concrete verbs.
– Avoid filler transitions (“Moreover,” “In conclusion”).
– Keep paragraphs short (1–4 sentences) unless a scene demands longer.
– Include a clear nut graf within the first 300–450 words.
– Maintain fairness: include at least one skeptical perspective if my notes contain it; otherwise mark [TK_SKEPTIC].
17. Service Desk Article Brief + Pitch
Plan an actionable service story and pitch it.
Category: Service / How‑To Guest Post (actionable)
User problem: [READER_PROBLEM]
Product/tech context: [TECH_CONTEXT]
Audience skill level: [SKILL_LEVEL]
Constraints (budget, location, devices): [CONSTRAINTS]
Task: Create a Service-style article plan that helps readers make better decisions without sounding like an affiliate blog.
1) Write a one-paragraph promise: what readers can achieve in 15 minutes / 1 hour / 1 week.
2) Produce a step-by-step outline with clear headings, including “Before you start,” “Do this first,” “Common mistakes,” and “What to do if it fails.”
3) Add a “decision tree” in text form (IF/THEN) to guide readers to the right choice.
4) Include a testing/verification plan: how I will evaluate claims and avoid recommending unsafe steps.
5) Write 10 short callout boxes (warnings, pro tips, myths, troubleshooting).
6) Give a sources list format: what kind of docs I must cite (official docs, standards, reputable research).
7) End with a submission-ready checklist: clarity, safety, disclosures, and neutrality.
Deliverable: A full brief + a 200–250 word pitch email tailored to the Service desk.
Extra: Draft the first 250 words (intro + setup) in the desired tone [TONE], and add a short disclaimer template if the topic involves security, health, or money. Do not invent product test results; if tests are needed, mark [TK_TEST].
18. Multi-Pass Line Edit (Show Your Work)
Clarity, structure, evidence, and cut plan.
Category: Self-Edit SOP (clarity, flow, and polish)
Draft text: [PASTE_DRAFT]
Task: Run a multi-pass edit and show your work.
Pass 1 — Clarity: Identify confusing sentences, jargon, buried ledes, and missing context. Rewrite for a smart non-expert reader.
Pass 2 — Structure: Check the order of ideas, transitions, and whether the story promise is paid off. Suggest a better section order if needed.
Pass 3 — Voice: Remove PR/AI-sounding phrasing, add concrete verbs, tighten adjectives, and keep a consistent tone.
Pass 4 — Evidence: Mark claims that need sourcing and suggest what type of evidence to add.
Pass 5 — Cut: Propose a 15% trim plan (what to cut, what to keep) without losing key meaning.
Deliver:
1) A marked-up version (use **bold** for additions and ~~strikethrough~~ for deletions).
2) A clean final version.
3) A “top 12 fixes” list.
4) A final submission checklist: formatting, link hygiene, names/titles, and disclosures.
Constraint: Do not change meaning or add new facts. If a fact is missing, flag it.
Editor memo: Write a 6–8 sentence memo I can paste on top of the doc for an editor at [PUBLICATION_NAME]: what the story is, what changed in this revision, and what still needs reporting. Also propose 5 alternative subheads to improve scan-ability.
19. Fact-Check Packet Builder
Extract claims into a verification table.
Category: Fact-Check Packet (submission-ready)
Draft text: [PASTE_DRAFT]
My source links/notes: [PASTE_SOURCES]
Task: Create a fact-check document that makes verification easy.
1) Extract every discrete factual claim into a table with columns: Claim / Where in draft (section + sentence) / Primary source / Backup source / Evidence link / Confidence (high/med/low) / Notes.
2) Identify the 15 highest-risk claims (legal, reputational, medical, financial, safety, or politically sensitive) and recommend stronger sourcing or safer phrasing.
3) Check names, titles, organizations, dates, and numbers for consistency. List any potential errors.
4) Produce a “quote verification” table: Quote / Speaker / Context / Recording or notes? / Permission level (on record/background) / Follow-up needed.
5) Draft a short “methods & limitations” paragraph I can include in the final submission or as a note to the editor.
Rules: Do not invent sources. If a claim has no evidence in [PASTE_SOURCES], mark it as [NEEDS_SOURCE] and recommend what to find.
Link hygiene: Provide a checklist for archiving sources (PDF save, web archive, screenshots) and how to cite paywalled or private documents. End with a 10-minute “pre-submit fact sweep” routine I can do right before emailing.
20. Follow-Up System + Pitch Tracker
Email follow-ups and tracking workflow.
Category: Submissions, Follow-Ups, and Tracking
Publication: [PUBLICATION_NAME]
Editor contact (if known): [EDITOR_EMAIL]
Pitch type: [FEATURE_OR_IDEAS_OR_SERVICE]
My pitch text: [PASTE_PITCH]
Task: Create a follow-up system that is polite, effective, and non-annoying.
1) Recommend a follow-up schedule (day 7, day 14, etc.) and when to stop.
2) Draft 3 follow-up emails: gentle bump, “new development” update, and “closing the loop” note.
3) Draft a short message for when the editor says “not for us” that keeps the relationship strong and asks for direction.
4) Create a simple pitch tracker template (columns + rules) I can use in Google Sheets or Notion: publication, editor, date sent, status, next follow-up, notes, and version history.
5) Provide a “recycle plan” if rejected: how to adjust angle, identify a better outlet, and avoid simultaneous submissions if prohibited by guidelines.
Constraint: Keep tone professional and specific. No guilt-tripping. End with 5 habits that make editors more likely to respond to beginners.
Extra: Suggest 8 subject lines for follow-ups that are clear (not clickbait). Also include a one-paragraph reminder on etiquette: whether to reply in-thread, whether to use read receipts, and how to handle multiple editors without spamming. If the publication’s guidelines mention pitch length (e.g., 500–700 words for features), remind me to keep that consistent.
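If you prefer a plain file over Google Sheets or Notion, the tracker template in step 4 can be bootstrapped as a CSV with the same columns. A minimal sketch (the filename, example row, and 7-day follow-up rule are illustrative, not prescribed):

```python
import csv
from datetime import date, timedelta

# Columns named in the tracker template above.
COLUMNS = ["publication", "editor", "date_sent", "status",
           "next_follow_up", "notes", "version_history"]

def new_tracker(path, rows=()):
    """Write a pitch-tracker CSV with a header row plus any starter rows."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        writer.writerows(rows)

# Example rule: schedule the first follow-up 7 days after sending.
sent = date(2024, 5, 1)
new_tracker("pitch_tracker.csv", [{
    "publication": "[PUBLICATION_NAME]",
    "editor": "[EDITOR_EMAIL]",
    "date_sent": sent.isoformat(),
    "status": "sent",
    "next_follow_up": (sent + timedelta(days=7)).isoformat(),
    "notes": "feature pitch v1",
    "version_history": "v1",
}])
```

Import the resulting file into any spreadsheet tool; the status column drives the follow-up schedule from step 1.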
21. Portfolio + Bio Positioning Kit
Bios, clip selection, and 30-day plan.
Category: Writer Positioning (bio, clips, and credibility)
My background: [BACKGROUND]
Topics I want to cover: [TOPICS]
Clips or samples I have: [CLIPS]
If I have no clips: [NO_CLIPS_PLAN]
Target publication style: [PUBLICATION_NAME] (think WIRED-like)
Task: Create a submission-ready positioning kit.
1) Write 5 author bios (25 words, 50 words, 80 words, “expert,” and “reporter”). Each must be truthful and avoid inflated claims.
2) Recommend the best 3 clips to send for a given pitch and why. If I lack clips, propose 3 “spec” article concepts I can publish on my own site or Medium in 2 weeks to demonstrate fit.
3) Create a simple portfolio page outline (sections + what to include) and a checklist for a clean byline page (contact, beat, credibility signals).
4) Draft an editor-friendly “why me” paragraph that connects my background to [TOPIC] without oversharing.
5) Provide 10 “credibility moves” for beginners: how to show access, sourcing, rigor, and humility.
6) End with a 30-day plan: weekly actions to build a pipeline of pitches and clips.
Output: A neat kit I can paste into an email or attach as a one-page PDF.
22. Angle Stress Test + Reader Promise
Pressure-test your story angle so it’s truly publishable.
Category: Publication Fit
My inputs:
– Topic: [TOPIC]
– Proposed angle: [ANGLE]
– “New thing” I think is happening: [WHAT_CHANGED]
– Who I can access/interview: [ACCESS]
– The proof I expect exists: [EVIDENCE_TYPES]
Your task: Stress-test this idea like a tough editor. Do not be nice; be useful.
1) Write the clearest one-sentence reader promise (what the reader gets).
2) List 10 “this won’t work unless…” objections (novelty, stakes, access, reporting).
3) Give 3 alternative angles that use the same reporting but make the story stronger (e.g., human impact, systems angle, surprising tradeoff).
4) Identify the best section for [PUBLICATION_NAME] (e.g., gear/service, business, science, culture) and explain why.
5) Give a go/no-go decision plus exactly what to change in my pitch to make it a yes.
Output: A publishability verdict + improved angle options + a final “pitch-ready” one-liner.
23. Pitch Variants: 3 Angles, 1 Reporting Plan
Generate three editor-ready pitches from the same reporting.
Category: Pitch Writing
My inputs:
– Topic: [TOPIC]
– Core reporting I can do: [REPORTING_LIST]
– Who I can interview: [SOURCES]
– Why now: [WHY_NOW]
– Any personal connection (optional): [MY_CONNECTION]
Your task: Create 3 distinct pitch variants that all rely on the same reporting plan:
- Systems angle (how a system is changing + the tradeoffs).
- Human angle (a person/community story that reveals a larger trend).
- Contradiction angle (what people believe vs what’s actually happening).
Output: Three ready-to-send pitches (email format), each clearly labeled.
24. Reporting Timeline + Logistics + Budget
Turn your idea into an execution plan editors trust.
Category: Reporting Plan
My inputs:
– Deadline window: [DEADLINE_WINDOW]
– Word count range: [WORD_COUNT]
– Location/time zone constraints: [LOCATION_CONSTRAINTS]
– Travel needed (yes/no): [TRAVEL]
– Access/sources I already have: [CONFIRMED_SOURCES]
– Unknowns/risks: [RISKS]
Your task: Build a week-by-week plan that includes:
1) A reporting calendar (interviews, background research, data requests, site visits).
2) A “deliverables” list: outline due date, first draft, fact-check packet, revisions, final files (images/data).
3) A tiny budget estimate (travel, software, transcription) with low-cost alternatives for a beginner.
4) A risk plan: what could break, early warning signs, and backup moves.
5) A simple communications plan: when I update the editor and what I include in each update.
Output: A clean timeline table + risk/budget notes I can paste into my pitch or use as my work plan.
25. Interview Question Bank + Follow‑Up Ladder
Generate deep questions without sounding like a beginner.
Category: Sourcing
My inputs:
– Audience: [TARGET_READER]
– Claims I need to test: [CLAIMS_TO_TEST]
– My biggest confusion: [CONFUSION]
– Sensitivities (if any): [SENSITIVITIES]
Your task: Create a question bank that is not generic. For each source type, provide:
1) 10 core questions (open-ended, story-driving).
2) 5 “evidence” follow-ups (“How do you know?”, “What data shows that?”, “Who disagrees and why?”).
3) 5 “numbers” follow-ups (scale, timeline, cost, frequency).
4) 5 “contradiction” questions (test incentives, conflicts, marketing spin).
5) A short closing script that asks for documents, other sources, and permission to follow up.
Also write a polite follow-up ladder: Day 2, Day 5, and Day 10 messages (short, respectful, and written to earn a reply).
Output: Interview guide + follow-up ladder I can paste into my notes app.
26. Source Vetting + Credibility Scoring
Avoid weak sources and accidental misinformation.
Category: Fact-Checking
My inputs:
– Source list (paste here): [SOURCE_LIST]
– The 10 most important claims in my story: [TOP_CLAIMS]
– Any conflicts I suspect: [SUSPECTED_CONFLICTS]
Your task: Build a simple credibility system I can use as a beginner.
1) Create a scoring rubric (0–5) for expertise, transparency, incentives, track record, and verifiability.
2) Apply the rubric to each source (table format).
3) Identify which claims require a second independent confirmation, and list the best kinds of second sources (public records, neutral experts, datasets, peer-reviewed work).
4) Flag language that indicates hype (“breakthrough”, “revolutionary”) and suggest neutral replacements.
5) Provide a “safe citation” plan: what to link, what to quote, and what to paraphrase, with warnings about over-claiming.
Output: A credibility score table + verification plan + safer wording suggestions.
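A rubric like the one this prompt requests can also live as a tiny script you rerun as your source list grows. A minimal sketch, assuming a 0–5 scale per dimension and a hypothetical "flag anything under 60% of the maximum" rule (the source names and scores below are invented for illustration):

```python
# Hypothetical 0-5 scores per rubric dimension for each source.
RUBRIC = ["expertise", "transparency", "incentives", "track_record", "verifiability"]
sources = {
    "peer_reviewed_study": [5, 4, 4, 5, 5],
    "vendor_press_release": [3, 2, 1, 3, 2],
}

max_score = 5 * len(RUBRIC)
for name, scores in sources.items():
    total = sum(scores)
    # Flag low-scoring sources for independent confirmation before citing.
    flag = " (needs second source)" if total < 0.6 * max_score else ""
    print(f"{name}: {total}/{max_score}{flag}")
```

The threshold is a judgment call, not a standard; the point is to make "which claims need a second source" a mechanical check instead of a vibe.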
27. Vulnerable Sources + Anonymity Protocol
Ethics-first handling of sensitive interviews.
Category: Ethics
My inputs:
– Who might be at risk: [AT_RISK_GROUP]
– The sensitive details involved: [SENSITIVE_DETAILS]
– Why their story matters: [WHY_IT_MATTERS]
– My reporting methods (interviews/docs): [METHODS]
Your task: Create a practical protocol I can follow:
1) A risk assessment checklist (what could go wrong, severity, likelihood).
2) A consent script: how I explain publication, attribution, and fact-checking in plain language.
3) An anonymity decision tree: when it’s justified, how to describe someone without doxxing, what details to blur, and how to store notes safely.
4) A fairness checklist: how I represent the subject and give right-of-reply to relevant parties.
5) A “red line” list: what I should not do as a beginner (e.g., promise outcomes, encourage risky actions).
Output: A step-by-step ethical reporting protocol + ready-to-use scripts.
28. AI‑Assist Workflow + Disclosure Statement
Use AI ethically without harming trust.
Category: Ethics
My inputs:
– AI tools I used (if any): [AI_TOOLS]
– What I used them for: [AI_USES]
– What I did manually: [MANUAL_WORK]
– My citations/links list: [LINKS_LIST]
Your task: Create an ethical AI workflow tailored for a beginner freelancer:
1) List “allowed uses” vs “never uses” (e.g., no invented quotes, no fake citations).
2) Provide a verification checklist specifically for AI-assisted drafts (claim-by-claim review, quote verification, number checks).
3) Generate two disclosure statements: (A) short internal note to the editor, (B) optional public note if required.
4) Create a “citation hygiene” rule set (how I store sources, link them, and avoid accidental plagiarism).
5) End with a 10-minute pre-submit audit I can do every time.
Output: A practical ethics SOP + editor note + optional public disclosure text.
29. Evidence Ladder + Reading Sprint Brief
Turn messy research into a strong brief.
Category: Research
My inputs:
– What I already know: [CURRENT_KNOWLEDGE]
– My key questions: [KEY_QUESTIONS]
– Time available: [TIME_AVAILABLE]
– The claim that worries me: [RISKY_CLAIM]
Your task: Build an “evidence ladder” for this topic:
1) Define the strongest evidence types (peer-reviewed, government data, audited reports, court docs) and the weakest (press releases, influencer claims).
2) Create a 90-minute reading sprint plan: exactly what to look for in each source (methods, sample size, who funded it, limitations).
3) Provide a mini-glossary of terms I must not misuse.
4) List 12 search queries (Google-style) that are likely to surface primary sources and dissenting viewpoints.
5) Output a one-page brief template: “What we know / what we don’t / what’s debated / what would change minds.”
Output: Evidence ladder + reading sprint plan + one-page research brief template.
30. Data Audit + Chart Ideas + Caveats
Make data useful (and honest) for a magazine audience.
Category: Data & Visuals
My inputs:
– Dataset link or description: [DATASET_DESC]
– Key metric(s): [KEY_METRICS]
– Time range: [TIME_RANGE]
– Geography/population: [SCOPE]
– What I want to claim: [DATA_CLAIM]
Your task: Do a data audit and packaging plan:
1) List common pitfalls for this dataset (missingness, bias, definitions, survivorship, correlation vs causation).
2) Propose 5 chart/graphic options (each with: what it shows, why it matters, what to label, and what not to imply).
3) Provide 10 “caveat sentences” I can use to be transparent without killing the story.
4) Suggest a data appendix for the editor: how I computed anything, what filters I used, and links to raw sources.
5) If my claim is too strong, rewrite it into a defensible claim.
Output: Data audit checklist + visual ideas + caveats + defensible claim language.
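Two of the pitfalls this prompt names (missingness and missing denominators) are cheap to check yourself before you draft a claim. A minimal sketch of that pre-audit; the records, field names, and per-1,000 rate are hypothetical stand-ins for whatever your dataset actually contains:

```python
from statistics import mean

# Hypothetical records: monthly incident counts with a gap and a denominator.
rows = [
    {"month": "2024-01", "incidents": 120, "population": 50000},
    {"month": "2024-02", "incidents": None, "population": 50000},  # missing value
    {"month": "2024-03", "incidents": 180, "population": 52000},
]

# Pitfall 1: missingness -- report how much data is absent, not just the trend.
missing = sum(1 for r in rows if r["incidents"] is None)
print(f"missing months: {missing} of {len(rows)}")

# Pitfall 2: denominators -- a raw count can rise while the rate stays flat,
# so compute the rate per 1,000 people instead of quoting counts alone.
rates = [r["incidents"] / r["population"] * 1000
         for r in rows if r["incidents"] is not None]
print(f"mean rate per 1,000 people: {mean(rates):.2f}")
```

If the missing share is large or the rate tells a different story than the count, that belongs in your caveat sentences and your data appendix.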
31. Quote Weaving + Attribution Rules
Turn interviews into tight, accurate narrative.
Category: Writing Craft
My inputs:
– The point of the story: [STORY_POINT]
– The three strongest quotes (paste): [TOP_QUOTES]
– The three tricky claims inside quotes: [TRICKY_QUOTE_CLAIMS]
Your task: Create a quote-weaving playbook:
1) Rules for when to quote vs paraphrase (with reasons).
2) A mini-template for attribution lines that avoid repetition and avoid implying certainty I don’t have.
3) Show 6 before/after examples of improved quote integration (keep them short), using my quote placeholders.
4) Provide a “quote verification” checklist (confirm wording, meaning, context, permission if needed).
5) Give a 10-step process to convert transcript → usable story moments (beats, evidence, tension, explanation).
Output: A practical quote playbook + examples + verification checklist I can follow while drafting.
32. Pacing Plan: Sections, Beats, Transitions
Structure a long feature so it never drags.
Category: Structure
My inputs:
– Angle: [ANGLE]
– Main character or central case: [CENTRAL_CASE]
– Key evidence beats: [EVIDENCE_BEATS]
– What readers might misunderstand: [CONFUSIONS]
Your task: Build a pacing blueprint:
1) A section-by-section outline with word counts per section.
2) For each section: the “job” of the section, the key fact, and the emotional/curiosity hook that pulls to the next section.
3) A transition toolkit: 12 transition lines tailored to tech/culture reporting (cause/effect, contrast, zoom in/out).
4) A “cut list”: the top 10 signs a section is filler and how to tighten it.
5) A final checkpoint: does the story answer “so what?” at least three times (personal, societal, future)?
Output: A pacing outline + transition kit + tightening checklist.
33. 1,600‑Word Draft Skeleton With Placeholders
Get a full draft shape fast (without faking facts).
Category: Drafting
My inputs:
– Verified facts I can paste: [VERIFIED_FACTS]
– Interview notes/quotes I can paste: [INTERVIEW_NOTES]
– Key data points: [DATA_POINTS]
– What the reader should feel/learn: [READER_PROMISE]
Your task: Produce a 1,600-word skeleton draft that includes:
1) A sharp lede (no clichés), then a nut graf with stakes and novelty.
2) 6–8 sections with subheads, each with a clear point and at least one evidence placeholder.
3) Placeholders like [INSERT_VERIFIED_STAT] / [INSERT_QUOTE] / [INSERT_SOURCE_LINK] whenever you’d otherwise guess.
4) A sidebar callout idea (tools, timeline, glossary, or “what to watch next”).
5) A strong ending that returns to the promise and adds a forward-looking implication.
Output: A complete draft skeleton with obvious insertion points for my verified reporting.
34. WIRED‑ish Voice Without Copying
Sound modern and sharp—without imitating a brand.
Category: Style
My inputs:
– Audience: [TARGET_READER]
– Tone preference (choose): [TONE]
– Words/phrases I hate: [BANNED_PHRASES]
– Words I like: [FAV_WORDS]
Your task: Create a style guide I can follow in this one piece:
1) A “voice recipe” (sentence length mix, allowed humor, how technical to be).
2) A list of 15 safe metaphor/analogy patterns (so I can explain hard topics without cringe).
3) A “no-go” list: jargon, hype words, and vague verbs to avoid; provide replacements.
4) 10 examples of rewriting a dull sentence into this voice (use placeholders, not invented facts).
5) A micro-checklist for the final pass: clarity, specificity, fairness, and energy.
Output: A one-page style guide + rewrite examples I can apply while editing.
35. Self‑Edit SOP: Clarity → Flow → Tighten
Edit like an editor is watching.
Category: Editing
My inputs:
– Draft text (paste): [DRAFT_TEXT]
– Target word count: [TARGET_WORDS]
– Any sensitive claims: [SENSITIVE_CLAIMS]
Your task: Build a 3-pass editing workflow and apply it to a sample paragraph from my draft:
1) Clarity pass: mark confusing sentences and rewrite them simply, preserving meaning.
2) Flow pass: improve transitions, reorder if needed, and add signposting only where essential.
3) Tighten pass: cut fluff, kill redundancies, and strengthen verbs. Include a “cut list” of common filler phrases and what to do instead.
Also create a final checklist (15 items) that catches: unsupported claims, missing attribution, numbers without context, and accidental bias.
Output: A reusable self-edit SOP + before/after example + final checklist.
36. Fact‑Check Table + Link Pack Builder
Create an editor-friendly verification packet.
Category: Fact-Checking
My inputs:
– Draft text (paste): [DRAFT_TEXT]
– My source links (paste): [SOURCE_LINKS]
– Interviewees + dates: [INTERVIEWS_LIST]
Your task: Build a “claim table” with these columns:
– Claim (exact sentence or paraphrase)
– Type (number/quote/history/science/legal/product/etc.)
– Verification method (link/doc/interview recording/math check)
– Source (URL or document name + page/section)
– Confidence (high/medium/low) + what would raise it
Then identify the top 12 highest-risk claims and propose better wording if the evidence is weaker than the draft implies. Finally, create a clean link pack grouped by section headings, so an editor can click through quickly.
Output: Fact-check table + high-risk claim list + organized link pack.
37. Headline + Deck + Social Lines Pack
Package your story like a pro.
Category: Headlines
My inputs:
– Audience: [TARGET_READER]
– Key terms that must appear: [KEY_TERMS]
– Words to avoid: [BANNED_WORDS]
– One-sentence reader promise: [READER_PROMISE]
Your task: Create a packaging set:
1) 15 headline options (mix: curiosity, clarity, “why now”, contradiction). Mark the safest 3.
2) 10 subhead/deck options (1–2 sentences) that add stakes and specificity.
3) 8 social share lines (X/LinkedIn style), each with a different hook (stat, question, punchy claim, human moment).
4) 5 SEO title variants (clear + keyword-forward) and 5 meta descriptions (150–160 chars).
5) A warning list: headlines that would be misleading based on the reader promise, and why.
Output: Headline/deck/social/SEO pack + top picks + warnings.
38. Service/How‑To: Step System + Tool List
Write a useful guide that still feels like a magazine piece.
Category: Service/How-To
My inputs:
– Reader level: [READER_LEVEL]
– Tools/platforms involved: [TOOLS]
– Common mistakes: [COMMON_MISTAKES]
– What not to recommend: [DONT_RECOMMEND]
Your task: Build a publish-ready service outline that includes:
1) A sharp intro that frames the problem and the payoff.
2) A step-by-step system (5–9 steps) with micro-checklists per step.
3) A tool list with “best for” guidance and beginner pitfalls (no affiliate hype).
4) A troubleshooting section (“If X happens, do Y”).
5) A safety/ethics note (privacy, security, or cost transparency).
6) A final “quick start” box the editor can use as a callout.
Output: A clean service article blueprint I can draft from immediately.
39. Argument Essay: Evidence + Counterpoints
Write a smart opinion/ideas piece with receipts.
Category: Ideas (Argument)
My inputs:
– Thesis (one sentence): [THESIS]
– My strongest evidence types: [EVIDENCE_TYPES]
– Who disagrees (groups/experts): [OPPOSITION]
– Reader’s stakes: [READER_STAKES]
Your task: Build the essay structure like an editor would:
1) A lede that earns attention without outrage.
2) A clear argument map: 3 main claims, each with evidence requirements and what would falsify it.
3) A counterargument section that is steel-manned (the best version of the opposing view).
4) A synthesis: what’s true on both sides, what’s missing, and what readers should actually do/think next.
5) A list of 10 “receipts” I should gather (types of sources, not fabricated links).
Output: An essay blueprint + evidence shopping list + counterpoints handled fairly.
40. Freelance Ops: Rates, Rights, Invoice Emails
Professionalize your freelance workflow fast.
Category: Freelance Ops
My inputs:
– Proposed rate or budget: [RATE]
– Word count / scope: [SCOPE]
– Rights requested (if known): [RIGHTS]
– Payment terms (if known): [PAYMENT_TERMS]
Your task: Create a beginner-friendly ops kit:
1) A simple negotiation script (polite, firm, flexible) for rate + kill fee + expenses.
2) A short rights checklist: first serial, reprints, exclusivity window, portfolio use, syndication—explain in simple language and what to ask for if unclear.
3) An invoice template (fields) + a “send invoice” email.
4) A late-payment follow-up email ladder (Day 7, Day 14, Day 30) with calm tone.
5) A one-page tracker layout (what I should track per pitch: date, editor, status, rate, invoice, payment).
Output: Negotiation + rights + invoicing kit I can copy/paste and reuse.
41. Rejection → Rewrite: Pitch Editing Sprint
Turn a “no” into a sharper pitch.
Category: Pitch Editing
My inputs:
– My original pitch (paste): [ORIGINAL_PITCH]
– Any editor reply (paste): [EDITOR_REPLY]
– My strongest access/reporting: [ACCESS]
– My best “why now”: [WHY_NOW]
Your task: Diagnose why this pitch failed using an editor lens (novelty, stakes, clarity, reporting, audience fit). Then do a rewrite sprint:
1) Provide a rewritten subject line + first paragraph hook.
2) Tighten the nut graf into 2 sentences that clearly state what’s new and why it matters.
3) Replace vague promises (“I’ll explore…”) with specific reporting actions and named source types.
4) Provide two alternative outlets (types) this pitch could fit and what to change for each.
5) Write a short re-pitch email that is confident, not desperate, and includes a clear ask (assignment / quick call).
Output: A diagnosis + improved pitch + two outlet variants + ready-to-send re-pitch email.
42. 90‑Day Guest‑Post Pipeline Plan
Build consistent pitching habits and publish more.
Category: Career
My inputs:
– Niche interests: [NICHES]
– Publication targets: [TARGET_PUBLICATIONS]
– Strengths (skills/access): [STRENGTHS]
– Weaknesses (time/skills): [WEAKNESSES]
Your task: Create a 90-day pipeline plan that is realistic and measurable:
1) Weekly routine (idea capture, fit checks, pitch writing, reporting blocks, writing blocks).
2) A target pitch volume (and why) + a simple spreadsheet column list I can track.
3) A “clip strategy”: how I choose easier outlets first vs stretch outlets, and how each clip upgrades my credibility.
4) A learning plan: what to study each week (structure, interviewing, data, editing).
5) A relationship plan: how to follow editors, share work, and pitch again respectfully.
6) A review ritual every Sunday: what to measure, what to adjust, and how to avoid burnout.
Output: A week-by-week 90-day plan + tracker columns + clip strategy map.
43. Outlet Deconstruction + Gap Finder
Reverse-engineer an outlet so your pitch fits on first read.
Category: Publication Fit
Inputs:
– Publication: [PUBLICATION_NAME]
– Topic area: [TOPIC]
– Section I think it fits: [SECTION_GUESS]
– 5 recent links I found (paste titles + URLs): [RECENT_LINKS]
Task: Deconstruct the outlet like an editor would. Use the links I pasted and infer patterns—do not invent facts about the outlet. Deliver:
1) A “voice snapshot” (sentence length, jargon level, humor level, how they explain technical things).
2) A “story DNA” checklist (what makes a piece feel like it belongs: lede style, nut graf style, evidence standards, reporting depth).
3) A list of 10 topics/angles they clearly publish a lot vs 10 they avoid.
4) A gap map: 6 under-covered angles connected to [TOPIC] that match their patterns (not breaking news). For each gap: what’s new, why now, and what proof you’d need.
5) A final “fit verdict” for my idea [MY_IDEA] with exact changes: angle tweak, better lead character, stronger evidence, and the best section tag to use in the pitch.
Output: A publication-fit brief + 6 gap ideas + a revised pitch one-liner.
44. Competitive Coverage Map + Differentiation
Make sure your idea isn’t already done—and find the fresh angle.
Category: Research
Inputs:
– Topic: [TOPIC]
– My proposed angle: [ANGLE]
– My “why now”: [WHY_NOW]
– Keywords/phrases people use: [KEYWORDS]
– Competitors (names or URLs): [COMPETITOR_OUTLETS]
Task: Build a coverage map without hallucinating sources. Create:
1) A research query pack: 20 search queries that uncover deep reporting, dissenting views, datasets, and primary docs (not just SEO blogs). Include variants for “criticism”, “failures”, “regulation”, “ethics”, “economics”, and “who profits”.
2) A coverage matrix template (table) with columns: outlet, headline, year, angle, evidence type, what’s missing, who wasn’t interviewed, and “story risk” (too similar / too broad / too speculative).
3) A differentiation menu: 10 ways to make my story distinct (new data, new characters, new geography, new tradeoff, hidden cost, unexpected beneficiary, counterintuitive timeline).
4) A final “most defensible unique angle” for my topic based on my access [ACCESS] and constraints [CONSTRAINTS]. Write it as a 2-sentence pitch core.
Output: Query pack + coverage matrix template + my best unique angle (pitch-ready).
45. Signal Radar: Finding Faint Signals
Generate WIRED-ish ideas from early indicators, not headlines.
Category: Idea Discovery
Inputs:
– Broad beat: [BROAD_BEAT]
– Constraints (time/money/access): [CONSTRAINTS]
– Regions/languages I can cover: [REGION]
– Communities I can watch (subreddits, forums, Discords, newsletters): [COMMUNITIES]
– Companies/tools/products I already use: [TOOLS_I_USE]
Task: Produce 12 faint-signal story ideas. For each idea, include:
1) Working headline + 1-sentence hook (“the future looks like…”).
2) The signal: what small thing is changing (specific indicator).
3) Why now (what makes this timely even if not breaking news).
4) What most people will get wrong (common misconception).
5) A reporting plan for a beginner: 3 interviews + 3 documents/datasets + 1 on-the-ground/online observation.
6) Risks and how to de-risk (access issues, hype, safety, evidence gaps).
7) Best category fit: Feature / Science / Business / Culture / Service.
Output: 12 publishable faint-signal ideas + which 3 are easiest for a beginner to execute first.
46. Editor‑Ready Pitch Builder With Sources
Create a pitch that includes mini-outline + proof plan.
Category: Pitch Writing
Inputs:
– Topic: [TOPIC]
– Angle: [ANGLE]
– Why now: [WHY_NOW]
– Access (who I can reach): [ACCESS]
– Estimated length: [WORD_COUNT]
– 6 starter sources I already found (paste or placeholders): [STARTER_SOURCES]
Task: Write a complete pitch email with:
1) Subject line + 2 alternative subjects.
2) Hook paragraph (2–3 sentences) with stakes and novelty.
3) Nut graf (2 sentences): what the story proves, not “explores”.
4) Mini-outline (6–8 bullets with planned sections). Each bullet must include an evidence placeholder like [INSERT_SOURCE_LINK] or [INSERT_INTERVIEW] so I don’t overclaim.
5) Reporting plan bullets: who I’ll interview, what docs/data I’ll use, and how I’ll fact-check numbers/quotes.
6) “Why me” for a beginner: credibility without exaggeration (skills, access, lived context, prior work).
7) Close with a clear ask and timeline: when I can file the first draft and when I’ll update the editor.
Output: A ready-to-send pitch email + mini-outline + verification placeholders.
47. Hook Tuning: Subject + Lede + Nut Graf
Upgrade a weak opening into something publishable.
Category: Pitch Editing
Inputs:
– Publication: [PUBLICATION_NAME]
– My current subject line: [SUBJECT]
– My current opening paragraph: [OPENING]
– My current nut graf: [NUT_GRAF]
– The 3 strongest facts I can verify: [STRONG_FACTS]
– The 3 sources I can access: [SOURCES]
Task: Improve the pitch opening in 4 rounds:
1) Diagnose what’s weak (vague stakes, no novelty, too broad, too “I will explore,” missing proof). Be blunt and specific.
2) Produce 8 subject line options in different styles (clarity-first, curiosity, contradiction, “why now,” data-forward). Mark the safest 2.
3) Rewrite the lede 5 different ways (scene-led, data-led, contradiction-led, character-led, “quiet change” signal-led). Each must include one concrete detail or a placeholder like [INSERT_VERIFIED_DETAIL].
4) Rewrite the nut graf into 2–3 sentences that state: what’s new, who it affects, and what I will prove with reporting.
End with a micro-checklist: 10 things every pitch opening must contain for a WIRED-like editor to take it seriously.
Output: Diagnosis + 8 subjects + 5 ledes + rewritten nut graf + checklist.
48. Remote Reporting Plan + Verification
Plan a strong feature even when you can’t travel.
Category: Reporting Plan
Inputs:
– Topic: [TOPIC]
– Angle: [ANGLE]
– Location/time zone: [TIMEZONE]
– What I can access online: [ONLINE_ACCESS]
– People I can reach: [SOURCES]
– Biggest reporting risk: [RISK]
Task: Build a remote reporting blueprint that includes:
1) A week-by-week schedule with interview blocks, doc/data blocks, and writing blocks.
2) A “texture plan”: how to capture sensory/detail remotely (screens, logs, product behavior, public meetings, livestreams, user forums, photos with permission).
3) A verification plan: how to confirm numbers, timelines, identities, and quotes. Include a “two-source rule” for risky claims.
4) A documentation pack list: what files I should request (policies, audits, reports, emails, screenshots, manuals) and how to organize them.
5) A contingency plan if sources ghost me: substitute sources and alternative evidence types.
Output: A remote reporting schedule + verification checklist + document request list.
49. Cold Outreach Scripts + Follow‑Up Ladder
Get interviews even if you’re new.
Category: Sourcing
Inputs:
– Source types: [SOURCE_TYPES]
– My angle: [ANGLE]
– What I need from them: [WHAT_I_NEED]
– Deadline: [DEADLINE]
– Sensitive topic? (yes/no): [SENSITIVE]
Task: Write an outreach kit with:
1) 3 cold email templates (expert, company PR, affected person). Each includes: who I am, what the story is, why them, time request, and consent/attribution note.
2) 2 LinkedIn message templates and 2 short DM templates (Twitter/Instagram style).
3) A follow-up ladder: Day 2, Day 5, Day 10, Day 14—each message gets shorter and more respectful, with an easy exit line (“No worries if not available”).
4) A scheduling script: how to propose times, time zones, call format, and recording permission.
5) A safety/ethics note for vulnerable sources: wording changes, consent reminders, and what not to promise.
Output: Copy‑paste outreach kit + follow‑up ladder + ethical guardrails.
50. Balanced Source Mix + Incentives Map
Avoid one-sided stories and hidden agendas.
Category: Sourcing
Inputs:
– Topic: [TOPIC]
– Stakeholders involved: [STAKEHOLDERS]
– My current source list (paste): [CURRENT_SOURCES]
– Conflicts I suspect: [SUSPECTED_CONFLICTS]
Task: Build a source strategy with:
1) A “source mix” framework: at least 6 buckets (neutral experts, implementers, affected people, regulators, skeptics/critics, competitors, historians, auditors—pick best fit).
2) For each bucket: what they’re best for, what they tend to distort, and what questions to ask to reveal incentives.
3) An incentives map: a table listing each source, their incentive (money, reputation, ideology, job security), and how to counterbalance it (second source, documents, data).
4) A fairness plan: who deserves right-of-reply and how to phrase it neutrally.
5) A final recommended interview list (10–15) and the order to approach them, so I gain clarity early in the reporting.
Output: Source mix plan + incentives map + prioritized interview list.
51. Conflict‑of‑Interest + Transparency Checklist
Protect trust with clear disclosures and clean framing.
Category: Ethics
Inputs:
– My relationship to the topic (if any): [MY_RELATIONSHIP]
– Companies/tools involved: [COMPANIES]
– Anything I was given (free product/travel/access): [BENEFITS_RECEIVED]
– Funding ties I suspect: [FUNDING_TIES]
– Affiliate links? (yes/no): [AFFILIATE]
Task: Create a transparency and COI kit:
1) A disclosure checklist (what to tell editor, what to tell readers if needed).
2) 6 sample disclosure sentences (short, neutral, non-defensive) for common cases: used the product, consulted before, received demo access, affiliate links, travel paid, personal connection.
3) A “marketing smell test”: 12 red flags in wording that make a piece read like PR (and how to rewrite into editorial language).
4) A fairness checklist: how to avoid unfair framing, straw-manning, or missing the strongest critique.
5) A final pre-submit ethics audit (10 minutes) that checks: quotes, attribution, consent, vulnerable sources, and inflated claims.
Output: COI checklist + disclosure lines + marketing smell test + pre-submit audit.
52. Post‑Publication Protocol: Updates + Corrections
Handle feedback and mistakes like a pro.
Category: Ethics
Inputs:
– Publication: [PUBLICATION_NAME]
– Sensitive areas in my story: [SENSITIVE_AREAS]
– Claims most likely to be contested: [CONTESTED_CLAIMS]
– My contact preference (public email/social/none): [CONTACT_PREF]
Task: Create a post-publication protocol that includes:
1) A monitoring plan for the first 72 hours and the first 2 weeks (what to watch, what to ignore).
2) A “challenge response” workflow: when someone disputes a claim, how I verify, what evidence I request, and when I involve the editor.
3) A corrections toolkit: how to write a correction request to the editor with clear evidence; how to propose updated wording; how to document what changed.
4) A safety plan: boundaries, doxxing precautions, and when to disengage (especially on sensitive topics).
5) A learning loop: what to add to my reporting checklist next time so I don’t repeat the same mistake.
Output: A calm post‑publication SOP + templates for challenges and correction requests.
53. Lede Workshop: 5 Opening Options
Generate multiple openings and pick the strongest one.
Category: Writing Craft
Inputs:
– Angle: [ANGLE]
– Central character/case (if any): [CENTRAL_CASE]
– Strong verified detail(s): [VERIFIED_DETAILS]
– Best quote I can use: [BEST_QUOTE]
– Main takeaway: [READER_PROMISE]
Task: Create 5 ledes, each 90–140 words:
1) Scene-led (moment that shows the issue).
2) Data-led (a surprising number with context).
3) Contradiction-led (what people believe vs reality).
4) “Quiet change” signal-led (small indicator that reveals a big trend).
5) Question-led (only if it’s truly specific and answerable).
After writing them, evaluate each lede against a rubric: specificity, stakes, novelty, clarity, and proof potential. Then pick the best lede and write the next paragraph (nut graf) that connects it to the full story.
Output: 5 ledes + scorecard + best lede + a strong nut graf (with placeholders).
54. Nut Graf + Story Spine Builder
Lock the core meaning of your story in 2–3 sentences.
Category: Structure
Inputs:
– Angle: [ANGLE]
– What’s new: [WHAT_NEW]
– Who it affects: [WHO_AFFECTS]
– Evidence I have: [EVIDENCE]
– The big question: [BIG_QUESTION]
Task: Create:
1) Three nut graf options (2–3 sentences each): clarity-first, tension-first, and evidence-first. Each must include a proof statement and a stake statement.
2) A “story spine” in 8 bullets: setup, change, stake, mechanism, conflict, proof, implication, ending note. Each bullet should force me to include a specific evidence placeholder like [INSERT_DATA] or [INSERT_QUOTE].
3) A “section job” list: the 6–8 sections my story likely needs (context, mechanism, critics, impact, future, etc.) and what each must deliver so I don’t ramble.
4) A cut rule: 10 signals that a paragraph doesn’t serve the spine, and how to fix it.
Output: 3 nut grafs + story spine + section jobs + cut rules.
55. Scene‑to‑Explainer Draft Module
Blend narrative and explanation without losing readers.
Category: Drafting
Inputs:
– Scene moment I observed (or can reconstruct from interviews): [SCENE_MOMENT]
– Who is in the scene: [SCENE_PEOPLE]
– Verified details I can include: [VERIFIED_DETAILS]
– The concept I must explain: [CONCEPT]
– Best evidence source types: [EVIDENCE_TYPES]
Task: Write a reusable “draft module” that includes:
1) A 200–260 word scene block template (sensory detail + action + tension) with placeholders like [INSERT_DETAIL] and [INSERT_QUOTE].
2) A pivot paragraph template that connects the scene to the bigger idea without sounding preachy.
3) A beginner-friendly explainer template: define the concept, show how it works, and show why it matters (with one analogy slot).
4) An evidence block template: how to introduce a statistic, provide context/denominator, and add a caveat without killing momentum.
5) A mini-close template that sets up the next section with a forward pull.
Output: A scene→explainer module I can paste repeatedly to draft the whole article fast and clean.
56. Jargon‑to‑Plain English Translator Pass
Make technical writing readable without dumbing it down.
Category: Style
Inputs:
– Draft excerpt (paste 400–900 words): [DRAFT_EXCERPT]
– Target reader level: [READER_LEVEL]
– Terms that must stay (non-negotiable): [MUST_KEEP_TERMS]
– Terms I suspect are unclear: [UNCLEAR_TERMS]
Task: Perform a translator pass in 3 layers:
1) Create a glossary table: term, plain-English definition, “how to use it correctly,” and a common wrong usage to avoid.
2) Rewrite my excerpt into a clearer version, keeping meaning identical. Replace vague verbs with specific verbs. Break long sentences. Add light signposting only where needed.
3) Add “precision notes” in brackets where I might be overclaiming, and suggest safer language with evidence placeholders (e.g., [INSERT_STUDY] / [INSERT_DATA]).
Finally, give me 12 analogy patterns I can safely use for this topic (what to compare it to, what not to compare it to), and a final checklist to ensure I didn’t trade accuracy for simplicity.
Output: Glossary + rewritten excerpt + precision notes + analogy patterns + final clarity checklist.
57. Data Storytelling: What the Chart Actually Says
Turn a dataset into a narrative with honest caveats.
Category: Data & Visuals
Inputs:
– Dataset description or key numbers (paste): [DATA]
– Metric(s): [METRICS]
– Time range: [TIME_RANGE]
– Audience question the chart should answer: [QUESTION]
– My draft claim: [DRAFT_CLAIM]
Task: Create a data storytelling pack:
1) Pick 3 chart candidates (line, bar, scatter, small multiples—choose best fits) and explain what each reveals and what it cannot prove.
2) For the best chart: propose axes, labels, and the one comparison that makes it instantly understandable (baseline, control, historical avg, peer group).
3) Write 3 captions: short, medium, and an “editor fact-check” caption that includes caveats and definitions (denominators, sample sizes).
4) Provide a narrative placement plan: where in the article it belongs, what paragraph should lead into it, and what paragraph should follow (interpretation + limitation).
5) Rewrite my claim into a defensible claim if needed.
Output: Chart options + best chart plan + captions + placement plan + defensible claim.
58. Numbers Fact‑Check: Units, Denominators, Time
Stop data mistakes before an editor finds them.
Category: Fact-Checking
Inputs:
– Draft paragraph(s) with numbers (paste): [NUMBER_PARAS]
– Original sources/links (paste): [NUMBER_SOURCES]
– Geography/time range: [SCOPE]
Task: Build a numbers verification report that includes:
1) A table for each number: the exact claim, the unit, the denominator/population, the time range, the original source line, and whether the draft wording matches the source.
2) Flag 10 common traps that might apply here (percent vs percentage points, nominal vs real, per-capita, base rate neglect, cherry-picked start dates, correlation vs causation).
3) Rewrite each risky sentence into safer language that is still readable and specific. If evidence is missing, insert a placeholder like [INSERT_SOURCE_LINK] instead of guessing.
4) Provide “context add-ons”: short phrases that help readers interpret numbers (scale comparison, historical baseline, uncertainty).
5) Provide a final checklist I can run before submission (5 minutes).
Output: Numbers fact-check table + flagged traps + rewritten safe sentences + final checklist.
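The first trap above, percent vs. percentage points, trips up even experienced writers and is easy to sanity-check before submission. A minimal sketch in Python (the rates are hypothetical figures, not from any real story):

```python
# Hypothetical figures: a rate moving from 40% to 50%.
old_rate = 0.40
new_rate = 0.50

# Percentage-point change: the simple difference between the two rates.
pp_change = (new_rate - old_rate) * 100              # 10 percentage points

# Percent change: the difference relative to the starting value.
pct_change = (new_rate - old_rate) / old_rate * 100  # 25 percent

# Same underlying shift, two very different-sounding numbers.
print(f"{pp_change:.0f} percentage points vs {pct_change:.0f} percent")
```

A draft that says the rate “rose 10 percent” when it rose 10 percentage points understates the shift by more than half; run both calculations before choosing the wording.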
59. Line‑Edit Gym: Tighten, Rhythm, Precision
Improve prose at the sentence level.
Category: Editing
Inputs:
– Draft excerpt (paste): [DRAFT_EXCERPT]
– Target tone: [TONE]
– Anything that must remain verbatim (quotes/legal): [LOCKED_TEXT]
Task: Do a teaching line edit:
1) Mark the top 15 issues (vague verbs, filler, stacked clauses, weak transitions, over-hedging, unearned certainty).
2) Provide an edited version of the excerpt (track-changes style: bold for additions, strikethrough for deletions).
3) Create a “sentence toolbox”: 12 rewrite patterns I can reuse (shorten, flip to active voice, lead with the key noun, split and re-link).
4) Create a “rhythm check” rule: where to place short sentences, where to vary, and how to avoid monotony.
5) End with a 10-minute polish routine: read-aloud pass, redundancy search, and specificity pass (replace “things,” “very,” “some”).
Output: Edited excerpt + top issues + sentence toolbox + polish routine.
60. Headline Suite: SEO + Newsletter + Social
Package one story for multiple channels.
Category: Headlines
Inputs:
– Angle: [ANGLE]
– Reader promise: [READER_PROMISE]
– Keywords to include: [KEYWORDS]
– One surprising verified detail: [SURPRISE_DETAIL]
– Words to avoid: [BANNED_WORDS]
Task: Create a multi-channel headline pack:
1) 12 SEO-forward headlines (clear, keyword-included, not cheesy). Mark the top 3 safest.
2) 10 curiosity headlines (but still accurate). Mark any that risk overclaiming and suggest safer versions.
3) 10 newsletter subject lines (shorter, punchier) + 6 preheader lines.
4) 8 social lines for X/LinkedIn (each with a different hook: stat, quote, contradiction, question, warning, “here’s what changed”).
5) 6 deck/subhead options that add stakes and specificity.
6) A “truth test”: for each top headline, list the exact sentence in the article that supports it (or a placeholder I must add).
Output: Headline suite + top picks + truth test mapping.
61. Service/Review Testing Protocol + Ethics
Write service journalism that’s rigorous and fair.
Category: Service/How-To
Inputs:
– Reader goal: [READER_GOAL]
– Tools/products to test: [TOOLS_PRODUCTS]
– My constraints (budget/time): [CONSTRAINTS]
– Any freebies/discounts? [FREEBIES]
– Safety/privacy concerns: [SAFETY_CONCERNS]
Task: Create a service/testing SOP that includes:
1) A testing matrix: criteria, how to test, pass/fail signals, and notes format (so I can compare fairly).
2) A transparency template: what I disclose to the editor and/or readers (freebies, affiliate links, access, limitations).
3) A structure outline for the article: who it’s for, what matters, top picks, alternatives, tradeoffs, and “what I’d buy if…” scenarios.
4) A “no-hype” writing checklist: banned language, required evidence, and how to describe limitations honestly.
5) A safety block: privacy, security, and cost traps readers should know.
Output: Testing matrix + disclosure template + service article outline + no-hype checklist.
62. Contract + Rights Red Flags + Negotiation Lines
Understand agreements without being a lawyer.
Category: Freelance Ops
Inputs:
– Contract excerpt (paste clauses): [CONTRACT_TEXT]
– Payment/rate: [RATE]
– Rights language I don’t understand: [RIGHTS_CONFUSION]
– Kill fee mentioned? [KILL_FEE]
Task: Analyze the excerpt in plain English and produce:
1) A clause-by-clause summary: what it likely means, why it matters, and what could go wrong for a freelancer.
2) A red-flag list (exclusivity duration, all-rights grabs, indemnity, broad warranties, unpaid revisions, non-compete-like language, AI clauses).
3) A “questions to ask” list (10 items) phrased politely for an editor (e.g., “Can we clarify the exclusivity window?”).
4) Negotiation lines for 4 common asks: keep portfolio rights, limit exclusivity period, clarify payment timeline, confirm kill fee/expenses.
5) A simple personal policy: what I accept, what I negotiate, what I walk away from—tailored to my level and goals [GOALS].
Output: Plain-English contract notes + red flags + questions + negotiation scripts + my policy.
63. Portfolio + Bio Upgrade for Better Pitches
Present yourself like a pro—even with few clips.
Category: Career
Inputs:
– My niche/beat: [BEAT]
– My best 3 clips (titles + links): [BEST_CLIPS]
– Skills/tools (reporting/data/editing): [SKILLS]
– Access/advantages (languages, region, communities): [ACCESS]
– Publications I want: [TARGET_PUBLICATIONS]
Task: Create a portfolio and bio kit:
1) A 2-sentence “what I do” positioning statement for editors (specific, not generic).
2) Three bio versions: 25 words, 60 words, 120 words—each with a confident “why me” angle and zero fluff.
3) A clip annotation format: for each clip, write a 1–2 sentence note explaining the reporting, what was hard, and what it proves I can do.
4) A portfolio homepage outline: sections, what to show first, and how to guide editors to the best work fast.
5) A pitch signature block template (contact, beat, recent clips, availability) that feels professional.
Output: Positioning + 3 bios + clip annotations + portfolio outline + signature block template.
64. Reader Persona + Promise Calibration
Make the piece instantly legible to the right reader.
Category: Publication Fit
Inputs:
– Topic: [TOPIC]
– My draft angle: [ANGLE]
– What the reader should do/think after: [OUTCOME]
– Reader knowledge level: [READER_LEVEL]
– My evidence types available: [EVIDENCE_TYPES]
Task: Build a “reader calibration” pack:
1) Create 3 reader personas (primary, secondary, accidental). For each: what they care about, what they fear wasting time on, what jargon they tolerate, and what proof convinces them.
2) Write 6 one-sentence reader promises: two clarity-first, two curiosity-first, two contradiction-first. Each must be specific and avoid hype.
3) Choose the best promise and explain why it’s the strongest fit for [PUBLICATION_NAME].
4) Translate that promise into: (a) a pitch subject line, (b) a nut graf, and (c) 5 section “jobs” that ensure the article delivers on the promise.
5) Add a “what to cut” list: 10 common beginner digressions that would dilute the promise (history dumps, vague trend talk, unearned predictions).
Output: Personas + best promise + pitch core + section jobs + cut list.
65. Evidence‑First Outline Builder
Outline from proof, not vibes.
Category: Structure
Inputs:
– Core claim I want to make: [CORE_CLAIM]
– Evidence I already have (paste bullets): [EVIDENCE_I_HAVE]
– Evidence I can realistically obtain: [EVIDENCE_I_CAN_GET]
– Interviews available: [INTERVIEWS]
– Word count: [WORD_COUNT]
Task: Create an evidence-first outline with 7–9 sections. For each section include:
1) The section’s job (what it must accomplish for the reader).
2) The “proof unit”: the key fact/quote/data point needed, written as a placeholder like [INSERT_DATA] or [INSERT_QUOTE] or [INSERT_PUBLIC_RECORD].
3) The most credible source type to supply that proof (neutral expert, dataset, audit report, peer-reviewed study, regulator, affected person).
4) A “skeptic check”: what a smart critic would say, and how I will address it with evidence.
5) A transition sentence that pulls into the next section (no fluff).
End by writing a 2–3 sentence nut graf that accurately reflects what the outline can prove. If my claim is too strong for my evidence, rewrite it into a defensible claim.
Output: Evidence-first outline + nut graf + defensible claim rewrite if needed.
66. Section‑by‑Section Source Plan
Build a source list that maps to every paragraph.
Category: Sourcing
Inputs:
– Outline (paste headings): [OUTLINE]
– My current sources (paste): [CURRENT_SOURCES]
– Stakeholders: [STAKEHOLDERS]
– Controversial claims: [RISKY_CLAIMS]
Task: Turn my outline into a section-by-section sourcing plan:
1) Create a table with columns: Section, Key claim, Needed proof, Primary source type, Secondary confirmation source, Skeptic/critic source, Document/data to request, and “red flags to watch.”
2) Fill the table using my outline. Use placeholders like [SOURCE_NAME] / [DOC_TITLE] where I haven’t provided names.
3) Recommend an interview order that reduces confusion early (who to interview first vs last and why).
4) Provide 12 targeted outreach questions that align with specific sections (not generic “tell me about…”).
5) Add a “missing voices” check: which stakeholder groups are absent and what bias that could introduce.
Output: A complete sourcing plan table + interview order + section-linked questions + missing voices check.
67. Public Records + FOIA Request Pack
Use documents to level up your reporting.
Category: Research
Inputs:
– Jurisdiction/country/state: [JURISDICTION]
– Agencies/orgs involved: [AGENCIES]
– What I’m trying to prove: [WHAT_TO_PROVE]
– Time range: [TIME_RANGE]
– Names/keywords: [KEYWORDS]
Task: Create a public-records plan that includes:
1) A list of 12 document types likely relevant (audits, inspection reports, meeting minutes, procurement contracts, incident logs, enforcement letters, court filings, policy memos).
2) A “where to look” checklist (agency portals, open data sites, registers, archives) and what search terms to use.
3) Two request templates: (A) a short, friendly records request email, (B) a more formal FOI/FOIA-style request (still plain language). Include placeholders for [AGENCY_NAME] and [RECORDS_DESCRIPTION].
4) A tracking table (date sent, agency, scope, response, fees, follow-up date).
5) A verification workflow: how to cite documents, capture excerpts with page numbers, and avoid misinterpretation.
Output: Public-records plan + request templates + tracking table + citation/verification workflow.
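The tracking table in step 4 can live in a plain CSV you update as responses arrive. A minimal sketch in Python's standard library (the file name, column names, and the sample dates are placeholders you can rename):

```python
import csv

# Columns mirror the tracking table in step 4 above.
COLUMNS = ["date_sent", "agency", "scope", "response", "fees", "follow_up_date"]

def init_tracker(path: str = "records_requests.csv") -> None:
    """Create the tracking CSV with a header row (overwrites any existing file)."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow(COLUMNS)

def log_request(path: str, **fields: str) -> None:
    """Append one request; any column not supplied is left blank."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([fields.get(c, "") for c in COLUMNS])

# Example entry using the same placeholders as the request templates.
init_tracker()
log_request("records_requests.csv",
            date_sent="2024-05-01", agency="[AGENCY_NAME]",
            scope="[RECORDS_DESCRIPTION]", follow_up_date="2024-05-15")
```

Sorting the file by `follow_up_date` each week tells you which agencies to chase next.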
68. Cross‑Check Claims: Opposition + Rebuttal
Make the story robust against smart critics.
Category: Fact-Checking
Inputs:
– My 10 key claims (paste): [KEY_CLAIMS]
– Evidence links/notes (paste): [EVIDENCE]
– Stakeholders likely to dispute: [DISPUTERS]
Task: For each claim, produce a table with:
1) The claim (exact).
2) Best supporting evidence type (and what would be stronger).
3) The strongest opposing argument (steel-man it).
4) What evidence would support the opposition.
5) A fair rebuttal that does not overclaim, using either existing evidence or placeholders like [INSERT_NEW_SOURCE] if I need more reporting.
6) A safer rewrite of the sentence that keeps the meaning but reduces vulnerability (hedging only when warranted).
Then identify the 3 claims that are weakest, and propose a reporting plan to either strengthen them or cut them. Finish with a “fairness summary” paragraph I can add to the story that acknowledges limits without undermining the main point.
Output: Opposition/rebuttal table + weakest-claims plan + safer rewrites + fairness summary paragraph.
69. Risky Language Audit: Defamation + Medical + Finance
Reduce legal/ethical risk in your wording.
Category: Ethics
Inputs:
– Draft excerpt (paste 400–1,000 words): [DRAFT_EXCERPT]
– Names/entities mentioned: [ENTITIES]
– Any allegations/criticisms: [ALLEGATIONS]
– Evidence strength (high/medium/low): [EVIDENCE_STRENGTH]
Task: Audit my excerpt for risky language and produce:
1) A list of sentences that could be interpreted as defamatory or as asserting unverified wrongdoing, plus a safer rewrite that sticks to what I can prove. Use document/quote placeholders like [INSERT_DOCUMENT] when needed.
2) A list of any medical/health claims that need careful framing; provide neutral language and a “what we know / what we don’t” pattern.
3) A list of financial/investment-adjacent phrases that could be read as advice; rewrite into informational, non-prescriptive language.
4) A “right-of-reply” checklist: when I should contact an entity for comment and how to phrase questions neutrally.
5) A final red-flag glossary: words that inflate certainty (“proves,” “scam,” “guarantees”) and safer alternatives.
Output: Risk flags + safer rewrites + right-of-reply checklist + red-flag glossary.
70. Bias & Fairness Audit
Catch blind spots before editors or readers do.
Category: Ethics
Inputs:
– Draft excerpt (paste 600–1,200 words): [DRAFT_EXCERPT]
– Who the story affects: [AFFECTED_GROUPS]
– My angle: [ANGLE]
– Critics I should include: [CRITICS]
Task: Perform a fairness audit with these outputs:
1) Identify framing risks: who is portrayed as active vs passive, who is blamed, who is invisible.
2) Flag loaded language and replace it with neutral, precise language.
3) List “missing context” questions (history, economics, policy, power) that would help readers understand without moralizing.
4) Suggest a balanced source mix: which voices are overrepresented and which are missing; propose 8 source types and what each can correct.
5) Write a short “limits & uncertainty” paragraph that acknowledges what the reporting can’t prove.
6) Provide a 12-point fairness checklist I can reuse for every story (attribution, consent, right-of-reply, incentives, vulnerable sources).
Output: Fairness audit notes + rewrites + missing-context questions + source mix + reusable fairness checklist.
71. Image/Visual Brief + Captions + Alt Text
Make visuals editorial, accessible, and accurate.
Category: Data & Visuals
Inputs:
– Angle: [ANGLE]
– Data available (if any): [DATA_AVAILABLE]
– Visual style constraints (photo/illustration/chart): [VISUAL_TYPE]
– Accessibility needs: [ACCESSIBILITY_NOTES]
Task: Create a visual package with:
1) 6 visual concepts, each with: purpose, required data/reference, placement in article, and risk of misinterpretation.
2) For the top 2 visuals: write a caption (30–45 words) that is accurate and includes necessary caveats. Add placeholders like [INSERT_SOURCE] where citations are needed.
3) Write alt text for those visuals (140–160 characters max) that’s descriptive and accessible.
4) Provide an “editor visual brief” paragraph I can paste into my pitch explaining visuals without sounding demanding.
5) Add a mini checklist: data labels, units, time range, definitions, and what the visual does NOT prove.
Output: 6 visual ideas + captions + alt text + editor brief paragraph + visual accuracy checklist.
72. Editor Notes → Revision Plan + Response Letter
Handle edits like a pro and build trust.
Category: Editing
Inputs:
– Editor notes (paste): [EDITOR_NOTES]
– My draft excerpt (optional): [DRAFT_EXCERPT]
– Deadline for revisions: [REVISION_DEADLINE]
– Any accuracy concerns: [ACCURACY_CONCERNS]
Task: Create a revision plan:
1) Categorize notes into: clarity, structure, reporting gaps, fact-check, tone, and “editor preference.”
2) For each note, propose an action: accept, partial accept, or push back—with a reason. If pushing back, write a polite alternative and a compromise option.
3) Build a revision checklist with a suggested order (big structural changes first, then line edits, then fact-check packet updates).
4) Draft a response email to the editor: confirm receipt, restate deadline, summarize what I’ll change, ask 1–2 clarifying questions only if necessary, and offer a quick call if needed.
5) Provide a “change log” template I can deliver with revisions (what changed, where, and why).
Output: Categorized notes + accept/pushback plan + revision checklist + response email + change log template.
73. Cut 20% Without Losing Meaning
Hit word count fast while improving the draft.
Category: Editing
Inputs:
– Draft text (paste): [DRAFT_TEXT]
– Target word count: [TARGET_WORDS]
– Must-keep sections/quotes: [MUST_KEEP]
Task: Provide a 3-step cutting plan and apply it to my draft:
1) Identify the story spine in 4 bullets (what must remain).
2) Mark cut opportunities in buckets: redundancies, throat-clearing intros, over-explaining, weak anecdotes, extra history, repeated quotes, hedging, and vague trend talk.
3) Produce a “cut list” of specific sentences/paragraphs to remove or compress, with rationale. If you can’t edit the whole draft at once, do it for the first [WORD_SAMPLE] words and give a pattern I can apply to the rest.
4) Provide rewrite swaps: 15 shorter sentence rewrites that keep the same meaning.
5) End with a “tightness checklist”: 12 rules that catch fluff fast (e.g., “if a paragraph has no new fact, cut or merge”).
Output: Spine bullets + cut plan + cut list + short rewrites + tightness checklist.
74. Final Pre‑Submit QA Checklist
Make your submission ‘clean’ for editors and fact-checkers.
Category: Fact-Checking
Inputs:
– Draft text (paste or summarize): [DRAFT_TEXT]
– Link list (paste): [LINKS]
– Interviews list: [INTERVIEWS]
– Disclosures/COI: [DISCLOSURES]
Task: Build a 20-minute QA sprint with timed steps:
1) Claims scan: identify every sentence that asserts a fact; mark high-risk ones (numbers, allegations, medical, legal).
2) Attribution scan: ensure every claim has a source, every quote has context, and no assertion is orphaned.
3) Numbers scan: units, denominators, time ranges, and “compared to what?” checks.
4) Link hygiene: ensure links are primary where possible, no broken references, and citations match the claims.
5) Consistency scan: names, spellings, acronyms, terminology definitions, and timeline consistency.
6) Ethics scan: disclosure completeness, right-of-reply where needed, vulnerable source handling.
7) Delivery pack: what files/notes to send with the draft (fact-check table, link pack, image captions).
Output: A timed QA checklist + a “submission packet” list tailored to this story.
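The link-hygiene step above can be partly automated before the manual pass. A minimal sketch using only Python's standard library (the URL list is a placeholder; paste your own [LINKS] there, and note that some sites block automated requests, so a flagged link still deserves a manual look):

```python
# Minimal link-hygiene check: flags broken or unreachable URLs in a draft's link pack.
import urllib.request
import urllib.error

# Placeholder list; replace with the links from your draft.
links = [
    "https://example.com/",
    "https://example.com/this-page-does-not-exist",
]

def check_link(url: str, timeout: float = 10.0) -> tuple[str, str]:
    """Return (url, status), where status is an HTTP code or a short error note."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "link-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return url, str(resp.status)
    except urllib.error.HTTPError as e:
        return url, f"broken ({e.code})"
    except urllib.error.URLError as e:
        return url, f"unreachable ({e.reason})"

for url, status in map(check_link, links):
    print(status, url)
```

This only catches dead links; whether a citation actually supports the claim it sits on still requires reading the source.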
75. Series Proposal: 3‑Part Package Pitch
Pitch a mini-series that feels premium, not bloated.
Category: Pitch Writing
Inputs:
– Series theme: [TOPIC]
– Why now: [WHY_NOW]
– Access/sources: [ACCESS]
– Constraints: [CONSTRAINTS]
– Ideal word count per part: [WORDS_PER_PART]
Task: Create a 3-part package with:
1) A one-paragraph series overview (the big question + why readers should care).
2) Three individual pitches (Part 1/2/3). For each: working headline, lede hook, nut graf, reporting plan bullets, evidence placeholders, and “why this part stands alone.”
3) A production plan: timeline for reporting and delivery, plus what can be reused across parts without repeating content.
4) A risk plan: if Part 2 reporting fails, what alternative angle still makes it publishable.
5) A single email that pitches the package to an editor—short, scannable, with a clear ask (commission the series or start with Part 1).
Output: Series overview + 3 part pitches + production/risk plan + ready-to-send package email.
76. Evergreen Maintenance Plan
Keep an evergreen guest post accurate over time.
Category: Reporting Plan
Inputs:
– Topic: [TOPIC]
– What changes fast in this topic: [FAST_CHANGES]
– What stays stable: [STABLE_FACTS]
– Any key numbers that may age: [AGING_NUMBERS]
– Links I’m relying on: [KEY_LINKS]
Task: Build an evergreen maintenance plan:
1) Identify which sections should be “timeless” vs “updateable.” Show how to write updateable sections with language that won’t age badly (avoid “this year” without dates, avoid absolute predictions).
2) Create an update checklist: what to re-check every 3 months (prices, policy, claims, product names, new studies) and where to find authoritative updates.
3) Provide a “date stamping” policy: when to use exact dates in text and how to add “as of [DATE]” style notes gracefully.
4) Create a “link hygiene” plan: replacing dead links, prioritizing primary sources, keeping an archive note (what changed and when).
5) Write an optional editor note proposing a scheduled refresh (if the outlet supports it).
Output: Evergreen structure guidance + update checklist + date/link policy + optional editor refresh note.
77. Case Study Template
Turn one real example into a story with proof.
Category: Writing Craft
Inputs:
– Case study subject: [CASE_SUBJECT]
– Why this case matters: [WHY_THIS_CASE]
– Proof I can gather: [PROOF]
– Critics/alternatives: [CRITICS]
– Key timeline: [TIMELINE]
Task: Build a case-study package that includes:
1) A case study outline: setup, turning point, mechanism, conflict, proof, tradeoffs, outcome, implication.
2) A verification list per section: which documents, quotes, and data points are needed (use placeholders like [INSERT_DOC] / [INSERT_QUOTE]).
3) A “skeptic lane”: where in the narrative to introduce criticism and what evidence would fairly represent it.
4) A “context lane”: how to connect the case to broader systems without turning into a textbook.
5) A closing pattern that avoids cheesy moral lessons but leaves the reader with a sharp implication or question.
Output: Case-study outline + verification checklist + skeptic/context lanes + strong ending pattern.
78. Interview → Narrative Beats Extractor
Convert transcripts into story moments, not quote soup.
Category: Drafting
Inputs:
– Interview notes/transcripts (paste): [TRANSCRIPTS]
– The angle: [ANGLE]
– The reader promise: [READER_PROMISE]
– Key claims to prove: [KEY_CLAIMS]
Task: Process my transcript content into a beat map:
1) Identify 12–18 “beats” and label each as: scene, explanation, evidence, conflict, character, or implication.
2) For each beat: write a one-sentence summary, the best quote candidate (with a placeholder if needed), and what it proves or reveals.
3) Build a recommended beat order for a 1,500–2,000 word feature (include where the nut graf belongs).
4) Flag any quote that seems risky (unclear, exaggerated, unverified) and suggest how to verify or paraphrase safely.
5) Provide a drafting plan: which beats become subheads, which become transitions, and where to add data/documents to avoid “he said/she said.”
Output: Beat map table + recommended order + risky-quote flags + drafting plan.
79. Mini Sidebar Builder
Create a useful callout box editors love.
Category: Service/How-To
Inputs:
– Story angle: [ANGLE]
– Reader goal: [READER_GOAL]
– Key terms: [KEY_TERMS]
– Tools/resources involved: [TOOLS]
– One key timeline: [TIMELINE]
Task: Propose 6 sidebar options and then fully draft the best 2. For each option include:
1) Sidebar type (glossary/checklist/timeline/tools/FAQ/decision tree).
2) Placement in the article and what problem it solves for the reader.
3) Draft content in a clean format (bullets/table) with placeholders where needed (e.g., [INSERT_LINK]).
4) Editorial guardrails: what not to claim, how to cite sources, and how to avoid sounding like affiliate marketing.
Finish with a one-sentence note I can add in the pitch: “Includes a sidebar: [SIDEBAR_NAME].”
Output: 6 sidebar ideas + 2 fully drafted sidebars + pitch note.
80. SEO Without Selling Out
Make your story search-ready while staying editorial.
Category: Style
Inputs:
– Target keyword(s): [KEYWORDS]
– Angle: [ANGLE]
– Intended word count: [WORD_COUNT]
– Competitor headlines (paste 5): [COMP_HEADLINES]
– Must-keep editorial tone: [TONE]
Task: Create a “search-ready but editorial” brief:
1) Suggest 8–12 keyword variations and related questions real readers ask (not spam).
2) Provide a headline + deck that includes a keyword naturally and stays accurate.
3) Propose an outline that satisfies search intent but still reads like a magazine piece (include 1 narrative/scene slot and 1 skeptic/critics slot).
4) Provide a “keyword hygiene” rule: where keywords belong (headline, subhead, early paragraph) and where they don’t (repetition, awkward stuffing).
5) Write a meta description (150–160 chars) and 5 internal anchor-text suggestions that feel natural.
6) Provide a final editorial check: 10 tests to ensure the piece still feels human and reported (specificity, evidence, voice, fairness).
Output: Search-ready brief + headline/deck + outline + meta + editorial check.
81. Newsletter‑Angle Pitch Variant
Reframe your story for a newsletter editor.
Category: Pitch Writing
Inputs:
– Story angle: [ANGLE]
– Reader promise: [READER_PROMISE]
– One verified surprising detail: [SURPRISE_DETAIL]
– Sources I can cite: [SOURCES]
– Length target (newsletter): [NEWSLETTER_LENGTH]
Task: Write a newsletter-ready pitch that includes:
1) Subject line + 3 alternatives (short, punchy).
2) A hook paragraph (2–3 sentences) with one concrete detail and a “why now.” Use a placeholder if I haven’t provided the detail.
3) A bullet mini-outline for a newsletter format (setup → insight → evidence → takeaway → what to watch).
4) A credibility line for a beginner (“I can report this because…”).
5) A “link pack” plan: 4–6 links I will include and what each supports (primary/secondary). Use [INSERT_LINK] placeholders.
6) A clear ask and turnaround timeline (when I can file).
Output: A ready-to-send newsletter pitch email + mini-outline + link pack plan.
82. Turn One Story Into Three Pitches
Multiply your chances without duplicating work.
Category: Pitch Editing
Inputs:
– Core story: [TOPIC]
– Angle: [ANGLE]
– Reporting plan: [REPORTING_PLAN]
– Outlet A: [OUTLET_A] (audience: [AUD_A])
– Outlet B: [OUTLET_B] (audience: [AUD_B])
– Outlet C: [OUTLET_C] (audience: [AUD_C])
Task: Create 3 pitch variants using the same reporting core. For each outlet produce:
1) Fit statement (1–2 sentences) referencing the outlet’s likely priorities (service vs culture vs business).
2) Subject line + hook paragraph + nut graf (tight).
3) A reporting plan bullet list customized to what that outlet values (e.g., more “how-to” structure vs more “systems” reporting).
4) A “what I will not do” line that signals ethics (no invented facts, no PR copy).
5) A respectful sequencing rule: which outlet to pitch first, how long to wait, and how to avoid simultaneous submissions if the outlet discourages it.
Output: Three distinct, ready-to-send pitches + a sequencing/etiquette plan.
83. Editor Relationship Follow‑Up System
Follow up without annoying people (and get more yeses).
Category: Career
Inputs:
– Editor name: [EDITOR_NAME]
– Outlet: [PUBLICATION_NAME]
– Pitch title/topic: [PITCH_TOPIC]
– Status (no reply/rejected/maybe): [STATUS]
– My next idea (optional): [NEXT_IDEA]
Task: Build a relationship follow-up kit:
1) A 4-step follow-up ladder for no-reply pitches (Day 4, Day 10, Day 18, Day 30) with messages that get shorter and more respectful over time.
2) A rejection response template that keeps the door open and asks one useful question (“Was this fit, timing, or angle?”).
3) A post-publication thank-you note that is warm but professional, plus a “next pitch” bridge line that doesn’t pressure.
4) A light-touch “stay on radar” plan: what to share (relevant links, your published work), how often, and what to never do.
5) A tracking system: the exact columns for a pitch CRM and a weekly review ritual (what you measure and how you adjust).
Output: Follow-up ladder + templates + stay-on-radar plan + pitch CRM columns + weekly review ritual.
84. Post‑Clip Strategy: Where to Pitch Next
Turn one published guest post into better opportunities.
Category: Career
Inputs:
– Published clip link/title: [CLIP]
– What I learned while reporting: [LEARNINGS]
– Unanswered questions: [UNANSWERED]
– New access I gained: [NEW_ACCESS]
– Goal (rate/outlet/beat): [GOAL]
Task: Create a clip-leverage plan:
1) Identify 6 follow-up story angles that build on the clip (deeper reporting, different geography, skeptic angle, “what broke,” “who benefits,” policy angle). Each must include a one-sentence pitch core and a reporting next step.
2) Propose a “publication ladder” list: 10 outlet types to pitch next (not necessarily names), ordered by difficulty, and what changes in tone/reporting for each step.
3) Create a rate-upgrade plan: how to cite the clip as credibility, what to ask for, and how to frame scope increases (expenses, kill fee).
4) Provide 3 ready-to-send pitch templates referencing my clip without sounding braggy.
5) Give me a 14-day action checklist to execute immediately after publication (share strategy, outreach, next pitches).
Output: 6 follow-up angles + publication ladder + rate upgrade plan + 3 pitch templates + 14-day checklist.
85. Angle → Assignment: Editor Decision Packet
Turn an idea into a commission-ready decision memo.
Category: Pitch Writing
Inputs:
– Topic: [TOPIC]
– Angle: [ANGLE]
– Why now: [WHY_NOW]
– Access: [ACCESS]
– Evidence I have: [EVIDENCE_HAVE]
– Evidence I will obtain: [EVIDENCE_GET]
– Word count: [WORD_COUNT]
Task: Produce an “editor decision packet” that includes:
1) A one-sentence reader promise (specific, no hype).
2) A 120–160 word pitch core (hook + stakes + novelty).
3) A mini-outline (6–8 bullets) where every bullet includes an evidence placeholder like [INSERT_QUOTE] / [INSERT_DATA] / [INSERT_DOC].
4) A reporting plan (interviews + documents + verification steps). Include a backup plan if a key source fails.
5) A risk register: 6 risks (access, defamation, data gaps, safety) + how I mitigate each.
6) A “why me” line that’s honest for a beginner (skills, access, lived context).
7) A delivery timeline (outline date, draft date, fact-check packet date).
Output: A single scannable decision packet + a ready-to-send email subject line + closing ask.
86. Expense Plan + Budget Justification
Ask for expenses professionally (without sounding difficult).
Category: Freelance Ops
Inputs:
– Reporting needs: [REPORTING_NEEDS]
– Deadline window: [DEADLINE]
– Travel needed? [TRAVEL]
– Tools/services I might need: [TOOLS]
– Approximate budget ceiling (if any): [BUDGET_CEILING]
Task: Create an expense plan pack with:
1) A simple budget table: item, purpose, cost estimate, cheaper alternative, and “must-have vs nice-to-have.”
2) A justification paragraph: how each expense improves reporting quality or verification (not comfort).
3) Two email templates to the editor: (A) ask upfront before assignment, (B) ask after assignment but before spending. Include polite language and options (“I can do X remotely if budget is tight”).
4) A receipts workflow: what to save, how to label, and when to invoice (plus a line about pre-approval).
5) A beginner “expense boundaries” list: what never to expense, what to always pre-approve, and how to avoid surprises.
Output: Budget table + justification + 2 expense-request emails + receipts/invoicing workflow + boundaries list.
87. Fact vs Interpretation Markup Pass
Separate what you know from what you think.
Category: Fact-Checking
Inputs:
– Draft excerpt (paste 600–1,400 words): [DRAFT_EXCERPT]
– Source links/notes (paste): [SOURCES]
– High-risk areas: [HIGH_RISK]
Task: Perform a markup pass with a clear legend:
1) Tag each sentence as FACT / INTERPRETATION / SPECULATION / FRAMING.
2) For each FACT: specify the required citation or verification method (link, doc, interview recording).
3) For each INTERPRETATION: identify the supporting facts and add a skeptic check (what a critic would say).
4) For each SPECULATION: rewrite into a defensible forward-looking statement with conditions (“If X continues…”) or cut it.
5) Flag any FRAMING that sounds loaded or unfair; offer neutral rewrites.
6) Create a “claim repair list”: top 12 sentences that need stronger sourcing, and what to do next (interview, data pull, public record). Use placeholders like [INSERT_SOURCE_LINK].
Output: Tagged draft + verification notes + neutral rewrites + claim repair list.
88. Deepfake/AI Image Risk + Verification
Handle manipulated media safely and ethically.
Category: Ethics
Inputs:
– Media items I plan to reference (links/descriptions): [MEDIA_ITEMS]
– What I want to claim based on them: [MEDIA_CLAIMS]
– Sensitivity level (low/med/high): [SENSITIVITY]
– Entities involved: [ENTITIES]
Task: Create a media verification protocol:
1) A step-by-step checklist: source of origin, upload timeline, reverse image/video search, metadata hints (where available), cross-post comparisons, and corroborating reporting.
2) A “confidence grading” system (high/medium/low) and what actions raise confidence.
3) Safe language templates for each confidence level (how to describe what’s visible vs what’s inferred).
4) Ethical guardrails: when to avoid embedding, how to avoid amplifying harmful content, and how to blur/redact sensitive info.
5) A right-of-reply plan if media implicates a person or organization.
6) A short editor note: what I verified, what I couldn’t, and how I framed uncertainty.
Output: Verification checklist + confidence grading + safe language templates + ethics guardrails + editor note.
89. Timeline Builder + Date Hygiene
Prevent date errors and ‘timeline mush.’
Category: Research
Inputs:
– Events I mention (paste bullets): [EVENTS]
– Sources for dates (paste links/notes): [DATE_SOURCES]
– Geography/time zones involved: [TIMEZONES]
– Draft excerpt (optional): [DRAFT_EXCERPT]
Task: Build a “timeline + date hygiene” pack:
1) Create a timeline table: Date (ISO + human), event, evidence source, confidence, and “needs confirmation?”
2) Identify ambiguous timing words (“recently,” “last year,” “soon”) and rewrite them into specific dated language or a clearly bounded window (“in early [YEAR]”).
3) Flag any implied causality where timing doesn’t prove it; rewrite into cautious but readable language.
4) Provide a date consistency checklist: time zones, daylight saving time, fiscal years, release dates vs announcement dates.
5) Produce a “timeline paragraph” draft (130–180 words) that cleanly summarizes the sequence for readers, using placeholders where a date is not yet confirmed.
Output: Timeline table + rewrites + causality cautions + date checklist + timeline summary paragraph.
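The timeline-table step above can also be kept in a tiny script instead of a spreadsheet. This is a minimal sketch (the event names, sources, and time zones are placeholders, not from any real story): each event stores a timezone-aware datetime, so sorting catches out-of-order narration, and each row prints both the ISO date (for checking) and the human-readable date (for the draft).

```python
from dataclasses import dataclass
from datetime import datetime
from zoneinfo import ZoneInfo

@dataclass
class TimelineEvent:
    date: datetime          # timezone-aware, so cross-zone ordering is safe
    event: str
    source: str             # where the date is evidenced
    confidence: str         # "high" / "medium" / "low"
    needs_confirmation: bool

    def row(self) -> str:
        iso = self.date.date().isoformat()          # ISO form for checking
        human = self.date.strftime("%B %d, %Y")     # human form for readers
        flag = "CONFIRM" if self.needs_confirmation else "ok"
        return f"{iso} | {human} | {self.event} | {self.source} | {self.confidence} | {flag}"

events = [
    TimelineEvent(datetime(2024, 3, 5, tzinfo=ZoneInfo("Asia/Kolkata")),
                  "Policy announced", "[DATE_SOURCE]", "high", False),
    TimelineEvent(datetime(2024, 6, 1, tzinfo=ZoneInfo("UTC")),
                  "Enforcement begins", "[DATE_SOURCE]", "low", True),
]

# Sorting by the aware datetime surfaces any sequence errors in the draft.
for e in sorted(events, key=lambda e: e.date):
    print(e.row())
```

Keeping dates in one place like this makes the "needs confirmation?" column impossible to lose during revisions.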
90. Explainer Ladder: Beginner → Expert
Explain complex tech in layers without losing anyone.
Category: Writing Craft
Inputs:
– Concept to explain: [COMPLEX_CONCEPT]
– Audience level mix: [AUDIENCE_MIX]
– Jargon terms that must be used: [MUST_USE_TERMS]
– Common misconceptions: [MISCONCEPTIONS]
– Evidence sources I can cite: [SOURCES]
Task: Build an explainer ladder with 4 layers:
1) Layer 1 (40–70 words): simplest accurate explanation, no jargon.
2) Layer 2 (90–140 words): introduce key terms with quick definitions and one safe analogy.
3) Layer 3 (120–180 words): mechanism—how it works, what inputs/outputs are, where it fails. Include placeholders like [INSERT_EXAMPLE].
4) Layer 4 (100–160 words): limits, tradeoffs, and what experts debate. Include a skeptic note and evidence placeholders like [INSERT_STUDY].
Then provide: a mini glossary (term → definition → common misuse), 10 “precision sentences” to avoid overclaiming, and guidance on where to place each layer in a feature so it doesn’t become a textbook.
Output: 4-layer explainer + glossary + precision sentences + placement guidance.
91. Critic Placement + Balance Strategy
Add skepticism at the right moment (not as an afterthought).
Category: Structure
Inputs:
– Outline or section headings (paste): [OUTLINE]
– Strongest pro-claim: [PRO_CLAIM]
– Strongest criticism: [CRITICISM]
– Sources on each side: [SOURCES]
Task: Create a “balance placement plan”:
1) Map my outline into 6–9 beats and identify where a critic naturally belongs (early tension, mid-proof stress test, pre-conclusion limits).
2) For each critic placement, propose the exact function: challenge a claim, reveal incentives, show a failure case, or define boundaries.
3) Write 6 transition templates that introduce criticism gracefully (“But the data has a catch…”, “Not everyone buys that…”), avoiding straw men and snark.
4) Provide a “fairness kit”: questions to ask critics and supporters so neither side gets a free pass.
5) Create a final paragraph template that synthesizes: what we know, what’s debated, what to watch next—without false balance.
Output: Placement map + beat functions + transition templates + fairness questions + synthesis paragraph template.
92. Data/Document Request Emails + Tracker
Get the files that make your story real.
Category: Reporting Plan
Inputs:
– Organization types: [ORG_TYPES]
– Specific docs/data needed: [DOCS_NEEDED]
– Deadline: [DEADLINE]
– Sensitivity level: [SENSITIVITY]
Task: Deliver a document request toolkit:
1) 3 email templates: (A) company/PR request, (B) nonprofit/academic request, (C) government/agency request (informal). Each includes: context, what I’m writing, exact ask, deadline, and how I will use/cite information.
2) A follow-up ladder: Day 3, Day 7, Day 12 (shorter each time, polite exit line).
3) A “document verification” checklist: confirming versions, dates, authorship, and how to cite properly (page/section).
4) A tracking table: request sent, person, doc requested, status, next follow-up date, notes, and file location.
5) Ethical guardrails: data privacy, confidentiality, and what I should not promise (e.g., “I’ll publish exactly as you want”).
Output: 3 email templates + follow-up ladder + verification checklist + tracking table + ethics guardrails.
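The Day 3 / Day 7 / Day 12 follow-up ladder above is easy to compute rather than remember. A minimal sketch (the send date is an example, not a real request):

```python
from datetime import date, timedelta

FOLLOW_UP_OFFSETS = [3, 7, 12]   # the Day 3 / Day 7 / Day 12 ladder

def follow_up_schedule(sent: date) -> list[date]:
    """Return the calendar dates for each follow-up after a request is sent."""
    return [sent + timedelta(days=d) for d in FOLLOW_UP_OFFSETS]

sent = date(2024, 5, 1)
for i, due in enumerate(follow_up_schedule(sent), start=1):
    print(f"Follow-up {i}: {due.isoformat()}")
```

Paste the printed dates straight into the "next follow-up date" column of the tracking table.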
93. Texture Checklist: Make Reporting Feel Real
Add specific details without inventing anything.
Category: Writing Craft
Inputs:
– Where the story happens (places/online spaces): [WHERE]
– Who I interviewed: [INTERVIEWEES]
– Artifacts I have (screenshots, manuals, logs, receipts): [ARTIFACTS]
– One scene I can describe: [SCENE]
Task: Create an ethical “texture toolkit”:
1) A checklist of 30 safe texture sources (sounds, interfaces, timestamps, physical objects, UI labels, quoted phrases, document headings, settings menus, meeting agendas).
2) A “detail-to-proof” rule: for every detail, what is its source type (observed, quoted, document, data)? Provide a tagging system I can use in drafts.
3) Generate 12 texture prompts tailored to my story (questions I should ask or things I should look for).
4) Rewrite my scene [SCENE] into 180–230 words using placeholders like [INSERT_OBSERVED_DETAIL] / [INSERT_UI_LABEL] / [INSERT_QUOTE] where needed—no invented specifics.
5) Provide a final caution list: 10 details that are tempting to invent and why they are dangerous (brand names, dates, emotions, motives).
Output: Texture checklist + tagging rules + tailored prompts + rewritten scene + caution list.
94. Global Readability Pass
Rewrite for international readers without flattening the voice.
Category: Style
Inputs:
– Draft excerpt (paste 700–1,400 words): [DRAFT_EXCERPT]
– Regions I expect readers from: [REGIONS]
– Concepts likely unfamiliar: [UNFAMILIAR_CONCEPTS]
– Terms that must remain: [MUST_KEEP]
Task: Perform a global readability pass:
1) Identify local idioms, slang, and culture references; propose globally clear alternatives or quick context in 5–12 extra words.
2) Flag legal/policy assumptions that vary by region; rewrite into bounded statements (“In [COUNTRY], …”).
3) Simplify sentence structure where needed without making it childish. Keep key technical terms but define them once.
4) Provide a “context box” draft (80–120 words) that briefly explains background for non-local readers, using placeholders for country-specific details.
5) Provide a 12-point global clarity checklist I can run on future drafts (dates, currency, units, acronyms, time zones, names).
Output: Rewritten excerpt + context box + global clarity checklist + list of changes made.
95. Story Packaging: Web + Newsletter + Audio Add‑On
Offer an editor multiple ways to run your work.
Category: Pitch Writing
Inputs:
– Core angle: [ANGLE]
– Reader promise: [READER_PROMISE]
– Reporting assets I’ll have: [ASSETS] (quotes, data, scenes)
– Word count target: [WORD_COUNT]
– Turnaround time: [TURNAROUND]
Task: Produce a packaging bundle:
1) Web feature outline (6–8 sections) with evidence placeholders.
2) Newsletter version plan (450–700 words): what to cut, what to keep, and a “one big insight” structure.
3) Audio script plan (2–4 minutes): cold open, narration beats, quote clips placeholders, and a clean takeaway.
4) A deliverables list I can put in the pitch (“I can deliver: web draft + newsletter adaptation + audio-ready script”). Include realistic constraints (“no audio production, script only”).
5) A short pitch paragraph to the editor offering these options without sounding pushy.
Output: Web + newsletter + audio packaging plans + deliverables list + pitch paragraph.
96. Pitch‑Specific Bio + ‘Why Me’ Lines
Write credibility lines that match the exact story.
Category: Career
Inputs:
– My background (facts only): [BACKGROUND]
– Relevant skills/tools: [SKILLS]
– Access advantage (languages/region/communities): [ACCESS]
– Best clips (titles + links): [CLIPS]
– The pitch angle: [ANGLE]
Task: Create a pitch-specific credibility kit:
1) 12 “why me” one-liners (different flavors: access, method, lived context, technical competence, data competence, ethics). Mark the top 3.
2) Three mini bios: 25 words, 60 words, 110 words—each tailored to this exact pitch topic.
3) A signature block template optimized for editors: beat, links, availability, time zone, and how fast I reply.
4) A “beginner credibility” checklist: what to include (process, verification) and what to avoid (overclaiming expertise).
5) A final pitch paragraph that combines angle + why me + timeline in 4–6 sentences, ready to paste into an email.
Output: 12 why-me lines + 3 bios + signature block + credibility checklist + paste-ready pitch paragraph.
97. Corrections + Clarifications Template Pack
Handle updates politely, precisely, and fast.
Category: Editing
Inputs:
– Publication: [PUBLICATION_NAME]
– Problem type (correction/clarification/update): [PROBLEM_TYPE]
– Original line(s): [ORIGINAL_LINE]
– Correct info + source: [CORRECT_INFO]
– Urgency level: [URGENCY]
Task: Create a template pack:
1) A correction request email (short, factual) including: what’s wrong, what’s correct, evidence link, and suggested replacement text.
2) A clarification request email (when wording is misleading but not strictly wrong). Provide 2 rewritten sentence options.
3) An update note email (new info emerged). Include: what changed, why it matters, and whether it affects the thesis.
4) A public-facing correction note template (only if needed) written neutrally.
5) A change log table: date, section, original, new, evidence, editor decision.
Also include a “don’t do this” list: 10 ways writers accidentally make corrections worse (overexplaining, blame, vague language).
Output: 4 templates + change log table + don’t-do list.
98. Fact‑Check Automation: Claim Table Builder
Generate a professional claim table from your draft.
Category: Fact-Checking
Inputs:
– Draft text (paste): [DRAFT_TEXT]
– Source links/notes (paste): [SOURCE_NOTES]
– Interview list (paste): [INTERVIEWS]
Task: Build a claim table workflow and apply it:
1) Extract 25–40 distinct claims from my draft (prioritize: numbers, timelines, comparisons, allegations, technical assertions).
2) For each claim, output a row with: claim text, claim type, required proof, source (URL/doc/interview), location in source (page/section/timestamp), verification steps, and confidence (H/M/L).
3) Flag “source mismatch” where the draft says more than the source supports; suggest safer wording.
4) Identify the top 10 highest-risk claims and list what extra reporting would raise confidence.
5) Output an editor-friendly “link pack” grouped by my draft headings.
Use placeholders like [INSERT_SOURCE] if I didn’t provide a source—do not guess or fabricate.
Output: Claim table + mismatch flags + safer rewrites + high-risk list + organized link pack.
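The claim-table rows described above have a fixed shape, so they export cleanly to CSV for an editor. This is a minimal sketch with two invented example claims (the claim text, sources, and locations are placeholders): each row carries the same seven fields, and "high risk" is defined as low confidence or a still-missing source.

```python
import csv
import io

FIELDS = ["claim", "type", "required_proof", "source",
          "location", "verification_steps", "confidence"]

rows = [
    {"claim": "Adoption grew 40% in 2023", "type": "number",
     "required_proof": "dataset", "source": "[INSERT_SOURCE]",
     "location": "[PAGE/SECTION]", "verification_steps": "re-run totals",
     "confidence": "L"},
    {"claim": "The feature shipped before the audit", "type": "timeline",
     "required_proof": "changelog + audit report",
     "source": "[COMPANY_CHANGELOG_URL]",
     "location": "[PAGE/SECTION]", "verification_steps": "compare dates",
     "confidence": "M"},
]

# High risk = low confidence or no source yet: surface these first.
high_risk = [r for r in rows
             if r["confidence"] == "L" or "[INSERT_SOURCE]" in r["source"]]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
print(f"{len(high_risk)} high-risk claim(s) need extra reporting")
```

The CSV opens in any spreadsheet, which makes the editor-friendly link pack in step 5 a sort-and-filter away.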
99. Narrative Tension Without Drama
Build momentum using stakes and tradeoffs, not hype.
Category: Structure
Inputs:
– Angle: [ANGLE]
– Stakes (who it affects): [STAKES]
– What’s uncertain/debated: [UNCERTAINTY]
– Evidence I have: [EVIDENCE]
– Outline (optional): [OUTLINE]
Task: Create a tension plan for a feature:
1) Identify 5 “tension sources” appropriate for this topic (tradeoff, constraint, failure mode, incentive conflict, measurement uncertainty).
2) Map tension beats across the outline: where to raise a question, where to stress-test it, where to reveal limits, and where to pay off.
3) Provide 15 sentence templates that create momentum without hype (“The catch is…”, “But the numbers hide…”, “It works—until…”).
4) Draft a 180–240 word “tension spine” paragraph sequence using placeholders like [INSERT_DETAIL] and [INSERT_QUOTE] so I can insert reporting later.
5) Provide a “no-drama rulebook”: 12 red flags (loaded words, certainty leaps, villain framing) and safe alternatives.
Output: Tension sources + beat map + template lines + sample tension sequence + no-drama rulebook.
100. Consent + Recording Script
Protect sources and yourself during interviews.
Category: Ethics
Inputs:
– Interviewee type(s): [INTERVIEWEE_TYPES]
– Sensitivity level: [SENSITIVITY]
– Recording method: [RECORDING_METHOD]
– Attribution preference (on/off/anon): [ATTRIBUTION]
Task: Create an interview consent toolkit:
1) A pre-interview message template (email/text) that explains purpose, time, and consent basics.
2) A spoken consent script (30–60 seconds) that asks to record, explains how quotes are used, and confirms attribution rules.
3) A plain-English explanation of on-the-record, off-the-record, on background, and anonymous sourcing—plus what to do if someone tries to change terms mid-interview.
4) A vulnerable-source checklist: safety risks, identity protection, and what I should never promise.
5) A post-interview follow-up email template: thanks, key points, document requests, and permission to follow up.
Output: Pre-message + spoken script + rules explainer + vulnerable checklist + follow-up template.
101. Quote Hygiene: Paraphrase vs Quote Rules
Avoid accidental plagiarism and misquotation.
Category: Editing
Inputs:
– Interview notes/transcripts (paste): [TRANSCRIPTS]
– Draft excerpt (paste): [DRAFT_EXCERPT]
– Sources used (links): [SOURCES]
Task: Create a quote hygiene pack:
1) Rules for when to quote vs paraphrase vs summarize (with rationale).
2) A 10-step workflow: transcript → highlight → verify → select → integrate → attribute → confirm.
3) Show 8 examples of “bad paraphrase” vs “good paraphrase” patterns (generic examples) so I understand what’s too close to a source.
4) Provide attribution templates that vary rhythm and avoid repeated “he said.” Include how to attribute studies and reports clearly.
5) Create a “quote verification checklist”: confirm wording, confirm context, confirm permission if sensitive, confirm timestamps, confirm that quotes don’t overstate certainty.
6) Give a final pre-submit scan: what to search for in my draft (quotation marks, missing names, claims without attribution).
Output: Rules + workflow + paraphrase examples + attribution templates + verification checklist + pre-submit scan steps.
102. Decision‑Tree Service Article Builder
Help readers choose the right option fast.
Category: Service/How-To
Inputs:
– Reader goal: [READER_GOAL]
– Constraints (budget/time/privacy): [CONSTRAINTS]
– Options to compare: [OPTIONS]
– Testing criteria: [CRITERIA]
– Safety/ethics concerns: [SAFETY]
Task: Build a decision-tree service blueprint:
1) A short intro that frames the reader’s decision and stakes.
2) A decision tree in plain text (if/then) with 8–14 nodes, guiding the reader to the best option for their situation. Include “avoid this if…” branches for safety/privacy.
3) A comparison table: option, best for, tradeoffs, hidden costs, and verification notes (what evidence supports each claim). Use placeholders like [INSERT_TEST_RESULT].
4) A “quick start” checklist (5–8 steps) and a troubleshooting section.
5) A transparency/disclosure block: how to mention affiliates/freebies (if any) and how to keep tone editorial.
Output: Decision tree + comparison table + quick start + troubleshooting + transparency block.
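The plain-text if/then tree in step 2 can be sanity-checked by encoding it as nested yes/no nodes and walking a few reader paths. A minimal sketch (the question, threshold, and option names are placeholders): each node is either a question with `yes`/`no` branches or a recommendation leaf.

```python
# Each node is either a question dict with yes/no branches, or a leaf string.
TREE = {
    "q": "Is your budget under [BUDGET_THRESHOLD]?",
    "yes": {
        "q": "Do you need strong privacy guarantees?",
        "yes": "AVOID [OPTION_A] -- use [OPTION_B] instead",
        "no": "[OPTION_C] (cheapest viable choice)",
    },
    "no": {
        "q": "Will you use it daily?",
        "yes": "[OPTION_A]",
        "no": "[OPTION_B]",
    },
}

def walk(node, answers):
    """Follow a list of "yes"/"no" answers until reaching a recommendation."""
    answers = list(answers)          # don't mutate the caller's list
    while isinstance(node, dict):
        node = node[answers.pop(0)]
    return node

print(walk(TREE, ["yes", "no"]))     # -> "[OPTION_C] (cheapest viable choice)"
```

Walking every path before publication catches dead ends and missing "avoid this if..." branches that are easy to miss in prose.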
103. Headline Testing Plan + Selection Rubric
Choose the best headline without guessing.
Category: Headlines
Inputs:
– Reader promise: [READER_PROMISE]
– Angle: [ANGLE]
– Keywords: [KEYWORDS]
– 10 headline candidates (paste): [HEADLINES]
– Risky words to avoid: [BANNED]
Task: Create a headline testing pack:
1) A scoring rubric (0–5) for: truthfulness (supported by article), specificity, curiosity, clarity, and search alignment.
2) Score each headline I provided and explain the top 3 winners and top 3 losers (why).
3) Rewrite the top 3 into 2 improved variants each: one more “clarity-first,” one more “curiosity-first,” both still accurate.
4) Provide 6 deck/subhead options that strengthen the best headline and add stakes.
5) Provide a “truth test”: for the winning headline, list the exact supporting sentence/section the article must contain (or a placeholder like [ADD_SUPPORTING_LINE]).
6) Provide a mini A/B plan for social or newsletter (what to test, how long, what metric signals “better”).
Output: Scored headline table + improved winners + decks + truth test + mini A/B plan.
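The 0–5 rubric in step 1 amounts to a small scoring function. A minimal sketch (the candidate headlines and their scores are invented placeholders): score each headline on the five criteria and rank by the mean.

```python
CRITERIA = ["truthfulness", "specificity", "curiosity", "clarity", "search"]

def score(headline_scores: dict) -> float:
    # Unweighted mean over the 0-5 rubric. Weight truthfulness higher
    # if your outlet treats accuracy as a hard gate rather than one factor.
    return sum(headline_scores[c] for c in CRITERIA) / len(CRITERIA)

candidates = {
    "[HEADLINE_A]": {"truthfulness": 5, "specificity": 4, "curiosity": 2,
                     "clarity": 5, "search": 3},
    "[HEADLINE_B]": {"truthfulness": 3, "specificity": 2, "curiosity": 5,
                     "clarity": 3, "search": 4},
}

ranked = sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True)
for title, s in ranked:
    print(f"{score(s):.1f}  {title}")
```

A clarity-first headline with honest numbers will usually beat a curiosity-first one on this rubric, which is the point: the scoring makes that tradeoff explicit instead of a gut call.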
104. Timeboxing Plan: Report → Draft → Revise
Finish reliably even if you have a day job.
Category: Career
Inputs:
– Hours available per week: [HOURS_PER_WEEK]
– Deadline date: [DEADLINE]
– Word count: [WORD_COUNT]
– Reporting complexity (low/med/high): [COMPLEXITY]
– Key constraints: [CONSTRAINTS]
Task: Build a timeboxing plan with:
1) A week-by-week schedule (or day-by-day if deadline is under 14 days). Include: interviews, research, drafting, fact-check table, revisions, final polish, submission packet.
2) A “minimum viable reporting” checklist (what must happen for the story to be honest and credible).
3) A writing cadence: how many words per session, when to outline, when to draft, when to stop editing mid-draft.
4) A catch-up protocol: what to cut first if behind (sections, optional sidebars) while protecting the spine and verification.
5) A simple tracker template: tasks, due dates, status, blockers, next action.
Output: Timeboxed schedule + MV reporting checklist + cadence rules + catch-up protocol + tracker template.
105. Idea Bank Generator: 30 Pitchable Concepts
Generate a month of WIRED-like pitches you can actually report.
Category: Idea Discovery
Inputs:
– My beats/interests: [BEATS]
– Region/language advantage: [REGION]
– Access (communities, industries): [ACCESS]
– Time per story: [TIME_PER_STORY]
– Budget level: [BUDGET_LEVEL]
– Topics to avoid: [AVOID]
Task: Generate 30 ideas, each with:
1) Working headline + 1-sentence hook.
2) Why now (timely but not breaking news).
3) What’s counterintuitive or under-covered (the “WIRED-ish twist”).
4) Reporting plan: 3 interview targets (types), 3 documents/datasets, and 1 “texture” observation I can do without travel if needed.
5) Risk check: what could make the idea weak (hype, access, evidence) and how to de-risk.
6) Best format category: Feature / Service / Business / Science / Culture / Newsletter.
Then pick the top 7 ideas for a beginner and explain why they’re the fastest path to publishable clips.
Output: 30 idea cards + top 7 picks + a one-week plan to pitch the first 5.
106. Pitch ‘One‑Screen’ Audit (Editor Scan Test)
Make your pitch pass the 10‑second editor scan.
Category: Pitch Editing
Inputs:
– Publication: [PUBLICATION_NAME]
– My pitch email (paste): [PITCH_TEXT]
– Any links I included: [LINKS]
– Access I truly have: [ACCESS]
– Deadline I can meet: [DEADLINE]
Task: Run a “one-screen audit” (what fits on one screen of an editor’s phone). Deliver:
1) A scan verdict (Yes/Maybe/No) with 3 reasons.
2) A missing-info list: exactly what’s unclear (angle, novelty, proof, scope, who’s affected, why now).
3) A proof plan check: list every claim that needs evidence and what evidence would satisfy it (doc/data/interview). If missing, insert [INSERT_SOURCE] placeholders.
4) A fit rewrite: rewrite my pitch into a tighter version (180–230 words) with a stronger hook, a cleaner nut graf, and a mini-outline (5 bullets) where each bullet includes an evidence placeholder.
5) A subject line pack: 8 subject lines and mark the safest 2.
6) A “beginner credibility” line that is honest but confident (process + access + verification).
7) A final checklist I must run before sending: attachments, links, timeline, and polite close.
Output: Editor audit + rewritten pitch + subject lines + send checklist.
107. Turn a Hot Take into a Reported Feature
Convert opinion into evidence-driven magazine reporting.
Category: Structure
Inputs:
– Hot take paragraph(s) (paste): [HOT_TAKE]
– Who/what it impacts: [WHO_AFFECTS]
– My access: [ACCESS]
– Regions in scope: [SCOPE]
Task: Produce a conversion kit:
1) Identify the strongest underlying question inside my hot take (what the story can actually prove).
2) Rewrite the thesis into a defensible reporting hypothesis (“The reporting will likely show that…”).
3) Build a 7–9 section outline where each section has a job + a proof unit (e.g., [INSERT_DOC], [INSERT_DATA], [INSERT_INTERVIEW]).
4) Add a skeptic lane: where to place the best counterargument and what evidence would fairly represent it.
5) Create a reporting plan: 10 interview target types + 10 document/data targets + a verification checklist for numbers and timelines.
6) Identify bias risks in my original hot take and propose neutral language rewrites that keep the energy but reduce moralizing.
7) Draft a magazine-style nut graf (2–3 sentences) that promises value, not outrage.
Output: Reported-feature blueprint + outline + reporting plan + neutral thesis + nut graf.
108. Triangulation Map: 3‑Proof Rule for Claims
Make key claims survive scrutiny with multiple proofs.
Category: Fact-Checking
Inputs:
– Key claims (10–20) (paste): [CLAIMS]
– Current sources/notes (paste): [SOURCES]
– Highest-risk claims: [HIGH_RISK]
Task: Build a triangulation map:
1) Classify each claim type: number/statistic, timeline, technical mechanism, allegation/criticism, comparative claim, prediction.
2) For each claim, propose 3 independent proof paths (e.g., primary document + neutral expert + dataset). Explain why each is independent and what it can/can’t prove.
3) Create a table: claim → proof #1 → proof #2 → proof #3 → confidence (H/M/L) → next action.
4) Identify claims that should be softened or cut because 3-proof is unrealistic; provide safer rewrites that still inform readers.
5) Add an “evidence hierarchy” cheat sheet: what evidence types are strongest for tech/culture stories (audits, court docs, peer-reviewed, filings, reputable datasets) and which are weak (press releases, anonymous blogs, viral threads).
6) Provide a final 10-minute verification sprint I can run before submission (numbers, dates, names, attribution).
Output: Triangulation table + risky-claim rewrites + evidence hierarchy + verification sprint.
109. Interview Question Bank by Source Type
Generate sharp questions that reveal proof, incentives, and tradeoffs.
Category: Sourcing
Inputs:
– Angle: [ANGLE]
– Claims to prove: [CLAIMS]
– Source types: [SOURCE_TYPES] (e.g., founder, engineer, regulator, critic, user, worker, academic)
– Sensitivity level: [SENSITIVITY]
Task: Build a question bank with sections:
1) Universal questions (10): evidence, timeline, “what changed,” measurement, tradeoffs, and what they won’t say publicly.
2) Source-type packs (8 packs, 10 questions each): Founder/PR, Engineer/Implementer, Neutral Expert, Regulator/Policy, Affected Person/User, Critic/Competitor, Auditor/Investigator, Historian/Context expert.
3) Incentives probes: questions that reveal funding, goals, pressure, and what success is measured as—without being hostile.
4) Verification questions: how to confirm numbers, ask for documents, request demos, request logs/screenshots, request references.
5) Safety/ethics add-ons for sensitive topics: consent wording, anonymity options, “off the record” boundaries, and what not to ask.
6) A mini “interview-to-draft” plan: how to mark quotes, extract beats, and avoid quote soup.
Output: Question bank + incentives probes + verification asks + ethics add-ons + interview-to-draft plan.
110. Data Cleanup + Caveats + Chart Readiness
Prepare data so your chart is honest and readable.
Category: Data & Visuals
Inputs:
– Data snippet/description (paste): [DATA]
– Source of data: [DATA_SOURCE]
– Time range: [TIME_RANGE]
– Intended claim: [INTENDED_CLAIM]
– Audience question: [QUESTION]
Task: Create a “chart readiness” pack:
1) Identify data quality issues: missing values, inconsistent units, duplicates, selection bias, survivorship bias, definition changes over time.
2) Define the correct denominators and baselines. If my claim needs per-capita or per-user normalization, explain exactly how and why.
3) Propose 3 chart options (chart types + axes + grouping) and explain the story each chart tells and what it cannot prove.
4) Write a caveat block (60–110 words) that I can include near the chart, with precise language and no overclaiming.
5) Provide a caption template (short + medium) and alt-text template, with placeholders like [INSERT_NUMBER] and [INSERT_DEFINITION].
6) Rewrite my intended claim into a defensible claim supported by the data, or explain what additional data is required.
Output: Data issues + normalization guidance + chart options + caveat block + caption/alt templates + defensible claim.
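The normalization step 2 describes is small enough to show directly. This is a minimal sketch with invented region names and numbers: raw counts favor big regions, so divide by population to get a per-100k rate, and flag rows with missing values instead of guessing.

```python
# Hypothetical rows: (region, incidents, population). Population is the
# denominator step 2 calls for -- raw counts alone favor big regions.
raw = [
    ("Region A", 1200, 3_000_000),
    ("Region B", 300, 500_000),
    ("Region C", None, 900_000),   # missing value: flag it, don't guess
]

clean, flagged = [], []
for region, incidents, pop in raw:
    if incidents is None or pop in (None, 0):
        flagged.append(region)
        continue
    per_100k = incidents / pop * 100_000
    clean.append((region, round(per_100k, 1)))

# Sorted by rate: the "biggest" region by raw count may not lead anymore.
for region, rate in sorted(clean, key=lambda r: r[1], reverse=True):
    print(f"{region}: {rate} per 100k")
print("needs follow-up:", flagged)
```

Here Region B's rate (60 per 100k) exceeds Region A's (40 per 100k) even though A has four times the raw count, which is exactly the kind of reversal the caveat block in step 4 should acknowledge.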
111. Ethical AI Use in Reporting + Disclosure Lines
Use AI to assist, not to fabricate—then disclose cleanly.
Category: Ethics
Inputs:
– How I used AI (list): [AI_USES]
– What I did not use AI for: [AI_NOT_USED_FOR]
– Any sensitive areas (health/legal/allegations): [SENSITIVE]
– Publication expectations (if known): [POLICY_NOTES]
Task: Build an ethical AI workflow for this story:
1) A clear “allowed vs not allowed” checklist for reporting (sources, quotes, numbers, claims). Include a rule: if I can’t cite it, I can’t claim it.
2) A transparency checklist: what I tell my editor, what I might tell readers (if required), and how to document AI outputs vs my reporting.
3) A fact-safety protocol: how to prevent hallucinations when using AI (always paste sources, verify every claim, use placeholders).
4) A plagiarism safety protocol: how to avoid copying phrasing from sources or AI outputs (rewrite from understanding; keep a citations log).
5) Write 8 disclosure sentences (short, neutral) for common scenarios: AI used for outline, language polish, summarizing my notes, generating interview questions, translating. Mark the safest 2.
6) Provide a “trust packet” I can attach: claim table, link pack, interview list, and what AI touched vs didn’t touch.
Output: Ethical AI checklist + safety protocols + disclosure lines + trust packet plan.
112. Plagiarism‑Proof Paraphrase Training Pass
Rewrite source material safely and still sound human.
Category: Editing
Inputs:
– Source excerpt (paste, short): [SOURCE_EXCERPT]
– My paraphrase attempt: [MY_PARAPHRASE]
– Where it will appear in my article: [SECTION]
– What claim it supports: [CLAIM]
Task: Teach and fix:
1) Diagnose similarity risks: copied phrasing, mirrored sentence structure, same metaphor, same sequence of points.
2) Provide a 5-step paraphrase method: understand → bullet key meaning → close source → rewrite in my voice → compare → cite.
3) Rewrite my paraphrase into 3 safe options: (A) tight paraphrase, (B) contextual paraphrase with added framing, (C) partial quote + paraphrase hybrid. Include citation placeholders like [CITE_SOURCE] and do not add new facts.
4) Provide a “citation clarity” template: how to attribute in-text (“According to…”) and how to cite documents/datasets.
5) Give me 10 rewrite patterns that reduce similarity (change subject order, swap active/passive, compress, expand, change metaphor, change clause order).
6) End with a quick self-check: 7 questions I ask before I keep any paraphrase in my draft.
Output: Risk diagnosis + method + 3 safe rewrites + citation template + rewrite patterns + self-check.
113. Policy/Regulation Explainer Pack
Explain rules accurately without turning into legalese.
Category: Research
Inputs:
– Jurisdiction: [JURISDICTION]
– Policy/regulation name: [POLICY_NAME]
– The part relevant to my story: [RELEVANT_SECTIONS]
– Who is affected: [AFFECTED_PARTIES]
– Links to primary text/analysis (paste): [LINKS]
Task: Build an explainer pack with:
1) A plain-English summary in 120–180 words: what it is, what it changes, and who it applies to (avoid overclaiming).
2) A “what the text says vs what people claim” table (5–10 rows), using direct paraphrase and citation placeholders.
3) A timeline: proposal → passage → enforcement → key dates, with a confirmation status column.
4) A stakeholder map: regulators, industry, civil society, affected users/workers, and critics—plus what each wants.
5) Three reporting angles: enforcement reality, unintended consequences, and compliance costs/loopholes.
6) A safe-language toolkit: phrases that accurately communicate uncertainty (“guidance suggests…”, “the rule appears to…”).
Output: Plain-English explainer + says/claims table + timeline + stakeholder map + angles + safe-language toolkit.
114. Company Profile Risk Guardrails + Right‑of‑Reply
Report on companies without becoming PR—or reckless.
Category: Ethics
Inputs:
– Company/project: [COMPANY_OR_PROJECT]
– My draft claims about them: [CLAIMS]
– Evidence I have: [EVIDENCE]
– What I suspect but can’t prove: [SUSPICIONS]
– Contacts/PR email: [PR_CONTACT]
Task: Create guardrails and a right-of-reply plan:
1) A “PR smell test” list (12 signals) and how to rewrite into editorial language with evidence requirements.
2) A “risk claim audit”: highlight sentences that assert wrongdoing or intent; rewrite into provable language or add [INSERT_DOCUMENT] placeholders.
3) A balanced sourcing plan: supporters, neutral experts, critics/competitors, affected users/workers, regulators—what each adds and how each can distort.
4) A right-of-reply email template to the company with specific questions tied to claims, a deadline, and a neutral tone.
5) A transparency block: how to mention what the company declined to answer and what you verified independently.
6) A final “editor comfort” checklist: what must be true (and cited) before publishing any strong critique.
Output: Guardrails + rewrites + balanced sourcing plan + right-of-reply email + transparency block + editor checklist.
115. Global/India Lens Builder
Find a local angle with global relevance and proof.
Category: Publication Fit
Inputs:
– Topic: [TOPIC]
– Local context (city/region/community): [LOCAL_CONTEXT]
– Global system involved (platform, policy, supply chain): [GLOBAL_SYSTEM]
– Access I have: [ACCESS]
– Constraints (time/budget): [CONSTRAINTS]
Task: Produce 6 pitchable angles that connect local + global. For each angle include:
1) Working headline and a one-sentence hook.
2) The “bridge”: how the local story reveals something global readers care about (tradeoff, incentive, failure mode).
3) Reporting plan: 3 interview target types + 3 documents/datasets + 1 texture observation (can be remote).
4) A skeptic lane: what the best counterargument is and how you’d test it.
5) A “why now” that isn’t breaking news (policy change, adoption curve, quiet shift).
Then pick the strongest 2 angles for [PUBLICATION_NAME] and write a 190–230 word pitch email for each, with evidence placeholders like [INSERT_DATA] and [INSERT_QUOTE].
Output: 6 angles + top 2 + 2 pitch emails (evidence-aware).
116. Ending Builder: Memorable Close Without Cheese
Write endings that land: implications, not motivational posters.
Category: Writing Craft
Inputs:
– Angle + thesis: [ANGLE]
– Outline (paste): [OUTLINE]
– Current ending draft: [ENDING_DRAFT]
– The strongest verified detail/quote: [STRONG_DETAIL]
– What remains uncertain: [UNCERTAINTY]
Task: Create 6 ending options in different styles (120–180 words each):
1) Implication ending (what the story changes for readers).
2) Tradeoff ending (benefit vs cost, clearly stated).
3) “What to watch next” ending (concrete indicators, not predictions).
4) Character return ending (return to the opening person/scene).
5) Systems ending (zoom out to incentives/policy without moralizing).
6) Question ending (only if answerable and specific).
Then evaluate each ending against a rubric: accuracy, earned tone, clarity, freshness, and emotional restraint. Pick the best ending and produce a final version that uses placeholders like [INSERT_VERIFIED_DETAIL] if needed. Finish with a mini checklist: 10 ending mistakes to avoid.
Output: 6 endings + scorecard + best ending (final) + checklist.
117. Subhead Architecture + Flow Blueprint
Make long pieces scannable and satisfying.
Category: Structure
Inputs:
– Current outline or headings: [HEADINGS]
– Reader promise: [READER_PROMISE]
– Key evidence assets: [EVIDENCE_ASSETS]
– Target tone: [TONE]
Task: Build a flow blueprint:
1) Rewrite my outline into 7–10 sections, each with a clear “job” statement and an evidence placeholder ([INSERT_QUOTE]/[INSERT_DATA]/[INSERT_DOC]).
2) Propose 12–18 subhead options in different styles (clarity-first, curiosity, contradiction, question—only if appropriate). Mark the top 8.
3) Provide transition sentences between every section (1–2 lines each) that keep momentum without fluff.
4) Add a pacing plan: where to place short sections, where to place an explainer, where to place a skeptic beat, where to place a data beat, and where to place a return-to-scene beat.
5) Provide a “section length budget”: approximate word targets per section to hit the final word count.
Output: Revised section map + best subheads + transitions + pacing plan + word budget.
118. Fact‑Check Packet Builder for Editors
Submit a pro ‘link pack’ and claim notes with your draft.
Category: Fact-Checking
Inputs:
– Draft text or key sections: [DRAFT]
– Links and documents list: [SOURCES]
– Interviews (names/roles + dates): [INTERVIEWS]
– Sensitive claims: [SENSITIVE]
Task: Build a submission packet that includes:
1) A claim table (25–40 rows): claim, type, source, exact location (page/section/timestamp), verification method, confidence (H/M/L). Use [INSERT_SOURCE] placeholders when missing.
2) A link pack grouped by draft sections with short annotations (“supports timeline”, “supports number”, “background only”).
3) An interview log: who, role, date, recording status, consent notes, and any attribution constraints.
4) A “numbers note” appendix: list every number, its unit, denominator, and time range.
5) A “limits & uncertainty” note: what the story cannot prove, what’s disputed, and how you framed it fairly.
6) A short cover note to the editor explaining what’s included and where to find things.
Output: Full fact-check packet + editor cover note (paste-ready).
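If you keep the claim table from deliverable 1 in a spreadsheet, a minimal script can generate a clean CSV skeleton for the editor. This is an illustrative sketch; the column names and the sample row are assumptions, not a house standard:

```python
import csv

# Columns mirror deliverable 1: claim, type, source, exact location,
# verification method, confidence (H/M/L). Rename to match your editor's needs.
COLUMNS = [
    "claim", "type", "source", "location",
    "verification_method", "confidence",
]

# One example row; use [INSERT_SOURCE] placeholders for anything missing.
rows = [
    {
        "claim": "The rule took effect in 2023",
        "type": "timeline",
        "source": "[INSERT_SOURCE]",
        "location": "p. 4, sec. 2",
        "verification_method": "primary text",
        "confidence": "H",
    },
]

with open("claim_table.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

Sort the finished CSV by confidence so the editor sees the L-rated claims first.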
119. WIRED‑ish Voice Tuning (Without Copying)
Upgrade tone: crisp, curious, precise—no imitation.
Category: Style
Inputs:
– Draft excerpt (paste): [DRAFT_EXCERPT]
– Topic: [TOPIC]
– Target vibe: [VIBE] (e.g., curious, skeptical, playful, urgent)
– Must-keep terminology: [MUST_KEEP]
Task: Provide a voice tuning pack:
1) Diagnose top 10 style issues: vague verbs, long sentences, jargon dumps, throat-clearing intros, repeated sentence openings, weak transitions.
2) Deliver a revised version of the excerpt with improved clarity and rhythm (no new facts). Mark any place that needs verification with [VERIFY] or [ADD_SOURCE].
3) Provide a “house voice recipe” I can reuse: sentence length mix, paragraph length, how often to define terms, how to use analogy safely, where to place skepticism.
4) Give me 15 rewrite patterns that create crispness (swap in concrete nouns, lead with the subject, cut filler, tighten qualifiers).
5) End with a 12-point style checklist I run before submitting any draft to a magazine editor.
Output: Diagnosis + revised excerpt + voice recipe + rewrite patterns + style checklist.
120. Pitch CRM + Weekly Review System
Track pitches like a pro (and improve your hit rate).
Category: Career
Inputs:
– Outlets I pitch: [OUTLETS]
– My common topics/beats: [BEATS]
– My constraints (time/week): [TIME]
– Current problems (ghosting, rejection, scope): [PROBLEMS]
Task: Create a pitch CRM and workflow:
1) A table schema (columns) for tracking: outlet, editor, email, pitch title, category, date sent, follow-up dates, status, response notes, next action, clip outcome, rate, and lessons.
2) A follow-up cadence rule that’s respectful (Day 4, Day 10, Day 18, Day 30) and includes “stop conditions.”
3) A weekly review ritual: what metrics to look at (reply rate, yes rate, time-to-response), and how to adjust (new angles, better fit, tighter pitches).
4) Templates: 3 follow-up messages + 1 rejection response + 1 “new idea” email that reopens the relationship politely.
5) A learning loop: how to tag pitches by why they failed (fit/timing/angle/proof) and what to do next week.
Output: CRM columns + follow-up rules + weekly ritual + templates + learning loop tags.
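The follow-up cadence in deliverable 2 is easy to automate once the CRM records a send date. A minimal sketch, assuming the Day 4/10/18/30 schedule above (the function name is illustrative):

```python
from datetime import date, timedelta

# Cadence from deliverable 2: follow up on Day 4, 10, 18, and 30.
# Stop condition: drop remaining dates once the editor replies or passes.
CADENCE_DAYS = [4, 10, 18, 30]

def follow_up_dates(sent: date) -> list[date]:
    """Return the scheduled follow-up dates for one pitch."""
    return [sent + timedelta(days=d) for d in CADENCE_DAYS]

# Example: a pitch sent on 1 March 2025 gets follow-ups on
# 5, 11, 19, and 31 March.
print(follow_up_dates(date(2025, 3, 1)))
```

Adding a computed "next action" date per row is what turns a log into a working CRM.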
121. 30‑Day Pitch Calendar (Idea → Send → Follow Up)
A realistic monthly plan for consistent pitching.
Category: Career
Inputs:
– Beats: [BEATS]
– Target outlet types: [OUTLET_TYPES]
– Time zone: [TIMEZONE]
– Energy constraints: [CONSTRAINTS]
Task: Build a 30-day plan that includes:
1) Weekly themes (Week 1: build idea bank; Week 2: fit + proof; Week 3: send + follow-up; Week 4: refine + resubmit).
2) A day-by-day schedule (Mon–Sun) with tasks and timeboxes: 30–90 minute blocks. Include at least 2 “rest/admin” slots per week.
3) A pitch quota that matches my time (e.g., 2–5 pitches/week) and a quality threshold rule (no pitch sent without evidence and an outline).
4) A follow-up system integrated into the calendar (specific days).
5) A recycling protocol: how to re-angle a rejected pitch for another outlet without spamming, and how to avoid simultaneous submissions where outlets discourage them.
6) A weekly scoreboard: what I record and what improvement action I take next week.
Output: 30-day calendar + quality thresholds + follow-up/recycle rules + weekly scoreboard.
122. Academic Expert Outreach + Prep Sheet
Get smart expert quotes without wasting anyone’s time.
Category: Sourcing
Inputs:
– Topic: [TOPIC]
– Subfields involved: [SUBFIELDS]
– My angle: [ANGLE]
– Experts I found (names/links): [EXPERTS]
– Deadline: [DEADLINE]
Task: Create an academic outreach kit:
1) A short email template (120–170 words) that explains the story, why I’m contacting them specifically, the exact ask, and the time request. Include an option for “email answers” if they can’t meet.
2) A follow-up template (shorter) and a gratitude note after the interview.
3) A prep sheet template: what I read before contacting them (2–3 papers), what I want to verify, and my top 8 questions (mechanism, uncertainty, what the field debates, what headlines get wrong).
4) A “study hygiene” checklist: how to avoid overclaiming from one paper, and how to ask about limitations, sample, and replicability.
5) A quote integration plan: how to use expert quotes to explain, not to decorate, and how to attribute properly.
Output: Outreach + follow-up + thank-you templates + prep sheet + study hygiene checklist + quote integration plan.
123. Skeptic/Contrarian Research Plan
Find the best critiques so your piece is smarter.
Category: Research
Inputs:
– Angle: [ANGLE]
– Key claims: [CLAIMS]
– Industries/actors involved: [ACTORS]
– Where the hype is loudest: [HYPE_ZONES]
Task: Create a skeptic research kit:
1) A query pack of 25 searches designed to surface criticism: “failure”, “audit”, “bug”, “lawsuit”, “regulator”, “whistleblower”, “replication”, “negative results”, “cost overruns”, “harm.” Include keyword variants for policy, economics, and safety.
2) A source hierarchy: which skeptical sources are strongest (audits, court docs, regulator reports, peer review, meta-analyses) vs weaker (hot takes, anonymous threads).
3) A critic shortlist framework: 8 critic types to interview and what each can reveal (e.g., implementers, safety researchers, unions, consumer advocates, competitors).
4) A “critique integration” plan: where in the story to introduce skepticism and how to avoid false balance while still being fair.
5) A checklist for evaluating critiques: incentives, evidence quality, generalizability, and whether critique addresses the core claim.
Output: Query pack + hierarchy + critic types + integration plan + critique evaluation checklist.
124. Interactive/Multimedia Add‑Ons Editors Approve
Suggest extras that add value, not complexity.
Category: Data & Visuals
Inputs:
– Article angle: [ANGLE]
– Reader goal: [READER_GOAL]
– Data/assets available: [ASSETS]
– Constraints (no dev, minimal dev, I can build): [CONSTRAINTS]
Task: Propose 10 add-on ideas, then fully spec the best 2:
1) For each idea: what problem it solves, where it goes in the story, what inputs/data it needs, and what risk it has (misleading, privacy, overclaiming).
2) Pick the best 2 given constraints and produce an “editor brief” for each: purpose, UX flow, required content/assets, verification notes, and accessibility considerations.
3) Provide copy blocks: a 1–2 sentence intro line that invites readers to use the add-on, and a disclaimer line if necessary (privacy, uncertainty, not advice).
4) Provide a pitch paragraph offering the add-ons in a non-pushy way (“Optional add-on: …”).
Output: 10 add-on ideas + 2 detailed briefs + copy blocks + pitch paragraph.
125. Draft Sprint: Zero Draft → Clean Draft in 2 Hours
A step-by-step writing session you can repeat.
Category: Drafting
Inputs:
– Outline (paste): [OUTLINE]
– Key evidence assets: [EVIDENCE]
– Best quote(s): [QUOTES]
– Target word count: [WORD_COUNT]
– Biggest sticking point: [STUCK_POINT]
Task: Build a 2-hour drafting script with exact timers:
1) 10 minutes: write the reader promise + nut graf (use placeholders [INSERT_PROOF] where needed).
2) 20 minutes: write section 1–2 with one proof unit each (quote/data/doc).
3) 5 minutes break + quick review: mark signals of overclaiming.
4) 25 minutes: write the mechanism/explainer section using the “explain → example → limitation” pattern.
5) 5 minutes break + add skeptic lane bullets.
6) 25 minutes: write the skeptic/critics section + your response (fair, evidence-aware).
7) 20 minutes: write the ending using “implication + what to watch next” without predictions.
8) 10 minutes: run a mini QA: highlight every claim and add [ADD_SOURCE] placeholders, then produce a link pack list.
Also provide a “what to do next” revision plan for tomorrow (structure, line edit, fact-check table).
Output: A timed sprint script + placeholder rules + next-day revision plan.
126. Portfolio Upgrade: Case‑Study Clip Page Builder
Turn one clip into a proof-of-skill page editors love.
Category: Career
Inputs:
– Clip title + link: [CLIP]
– Publication: [PUBLICATION_NAME]
– What I reported: [REPORTING_SUMMARY]
– Evidence used (docs/data/interviews): [EVIDENCE_USED]
– Outcome signals (comments, shares, editor feedback): [OUTCOMES]
Task: Create a clip case-study page blueprint with:
1) A 2-sentence positioning statement (“I report stories about…”).
2) A “project summary” block (what question the story answered, why it mattered, what readers learned).
3) A reporting process block: interview types, documents/datasets, verification method, and how you handled uncertainty.
4) A “skills demonstrated” block: reporting, sourcing, data work, narrative craft, ethics—mapped to concrete actions.
5) A “what I’d do next” block: 2–3 follow-up angles (shows momentum).
6) A pitch-ready signature block + 3 pitch lines referencing this clip without sounding braggy.
Output: Case-study page copy + follow-up angles + signature block + 3 pitch lines.
127. Cold‑Open Scene Builder (Truthful Texture)
Write a vivid opening without inventing details.
Category: Writing Craft
Inputs:
– Where the scene happens: [LOCATION_OR_CONTEXT] (physical place or online space)
– Who is present: [CHARACTERS] (roles, not private info)
– What happened (facts only): [SCENE_FACTS]
– Artifacts I can cite (screenshots, logs, emails, manuals): [ARTIFACTS]
– One verified quote I can use (optional): [QUOTE]
– Sensitivities (privacy, safety): [SENSITIVITIES]
Task: Create 3 cold open options (each 210–280 words) with different approaches:
1) “Moment of friction” open (something doesn’t work / a decision point).
2) “Quiet detail” open (small, concrete detail that hints at the larger system).
3) “Clock starts” open (a deadline, a change, a rule, a launch, a policy—without hype).
For each option: (a) include 3–6 texture hooks (sound, UI labels, timestamps, object details) using placeholders if needed, (b) embed a subtle stakes line, and (c) end with a bridge sentence that tees up the nut graf (“This is what [TOPIC] looks like up close…”).
Bonus: Give me a “truth checklist” for scenes: what I can safely describe, what must be attributed, and what I must not infer (emotions, motives).
Output: 3 cold opens + bridge lines + truth checklist.
128. Source Diversity Plan (Avoid One‑POV Reporting)
Prevent ‘single‑side’ stories by design.
Category: Sourcing
Inputs:
– Core claim / question: [CORE_QUESTION]
– Stakeholders already in my draft: [CURRENT_SOURCES]
– Who is affected (users/workers/communities): [AFFECTED_GROUPS]
– Geographic scope: [SCOPE]
– Sensitivity level: [SENSITIVITY]
– Deadline + constraints: [DEADLINE] / [CONSTRAINTS]
Task: Build a “source diversity blueprint” with:
1) A source matrix with at least 10 source types (supporters, neutral experts, implementers, regulators, critics, affected people, auditors, historians, competitors, frontline workers). For each: what they know, what they may bias, and what proof they can provide (docs/data/demos).
2) A minimum sourcing bar for this story (e.g., at least [N] sources across [K] buckets) and a “stop condition” (when to stop interviewing and start writing).
3) Outreach scripts: 3 short messages tailored to (A) expert, (B) affected person, (C) company/PR—each with consent clarity and a respectful ask.
4) A triangulation guide: how to confirm sensitive claims using at least 2 independent methods.
5) A fairness checklist: questions to ask each side so no one gets an easy ride, plus how to write “right of reply” honestly.
Output: Source matrix + minimum bar + scripts + triangulation guide + fairness checklist.
129. Numbers‑to‑Narrative Translator (Data → Meaning)
Turn stats into reader understanding without overclaiming.
Category: Data & Visuals
Inputs:
– Data points I plan to use (paste): [DATA_POINTS]
– Source(s) for the numbers: [DATA_SOURCES]
– The claim I want to support: [CLAIM]
– Audience question: [READER_QUESTION]
– Scope + time range: [SCOPE] / [TIME_RANGE]
Task: Create a “numbers‑to‑narrative pack”:
1) For each number: define unit, denominator, time range, and what comparison makes it meaningful (baseline, per‑capita, before/after, peer group). Flag missing denominators.
2) Write 3 levels of explanation for the key stat: (A) 1 sentence (simple), (B) 3 sentences (with context), (C) 1 short paragraph (with caveat).
3) Provide 12 safe phrasing templates that avoid overclaiming (“suggests,” “is consistent with,” “within this dataset…”).
4) Identify the most likely misunderstanding a reader will have and write a “myth vs reality” correction (2–3 bullets).
5) Recommend 2 chart/callout ideas and write caption + alt‑text templates with placeholders.
6) Provide a “data integrity checklist” for my draft: cherry‑picking, selection bias, definition changes, missing values, and correlation vs causation traps.
Output: Stat definitions + 3 explanation levels + safe phrasing templates + myth corrections + chart captions + integrity checklist.
130. Red‑Team Your Draft (Hostile Reader Test)
Stress‑test your story like the internet will.
Category: Editing
Inputs:
– Draft excerpt or outline (paste 700–1,500 words): [DRAFT]
– My source list/notes: [SOURCES]
– High‑risk claims: [HIGH_RISK]
– Entities mentioned (companies/people): [ENTITIES]
Task: Run a red‑team review with these deliverables:
1) “Attack summary”: 10 strongest objections a hostile but smart reader could raise (accuracy, scope, causality, missing context, bias).
2) “Vulnerability map”: highlight exact sentences that are most attackable; label each as: unsourced claim / ambiguous timing / implied intent / overgeneralization / framing bias / missing counterexample.
3) “Repair kit”: for each vulnerability, give (a) a safer rewrite, and (b) a proof plan (doc/data/interview) using placeholders [ADD_SOURCE] / [ADD_QUOTE].
4) “Fairness pass”: identify where you should add critics or limits, and where adding them would be false balance.
5) “Defamation caution”: mark statements that should be attributed or softened, and provide neutral rewrites.
6) A final checklist I can run before submission (names, numbers, dates, attribution, links, uncertainty).
Output: Objections + vulnerability map + repair kit + fairness guidance + final checklist.
131. Defamation‑Safe Language Rewrites (Risk Phrases)
Keep strong reporting while reducing legal/ethical risk.
Category: Ethics
Inputs:
– Risky sentences (paste 10–40 lines): [RISKY_LINES]
– Evidence I have for each line (paste): [EVIDENCE]
– What I truly observed vs inferred: [OBSERVED_VS_INFERRED]
– Who I contacted for comment: [RIGHT_OF_REPLY]
Task: Produce a “risk rewrite packet”:
1) For each risky sentence, label the risk type: allegation, intent, certainty leap, broad generalization, loaded term, missing attribution.
2) Provide 2 rewrites: (A) cautious‑but‑clear, (B) attribution‑forward (e.g., “According to [SOURCE]…”). Preserve meaning but remove overclaiming.
3) For each line, specify the minimum evidence needed to keep the stronger version (docs, court filings, audits, public statements, recorded interviews). Use [NEED_DOC] placeholders if missing.
4) Create a “loaded words” swap list: 25 high‑risk words and safer alternatives, with notes on when each is appropriate.
5) Provide right‑of‑reply integration: where to include the company/person response, and how to phrase “declined to comment” neutrally.
6) End with a short “accuracy pledge” paragraph I can keep in my workflow: what I will and won’t claim.
Output: Labeled risks + 2 rewrites per line + evidence requirements + swap list + reply integration guidance + pledge.
132. Anonymous Source Protocol (Ethical + Clear)
Handle anonymity like a professional newsroom.
Category: Ethics
Inputs:
– What the source knows: [SOURCE_KNOWLEDGE]
– Risk if identified: [RISK]
– Claims they made (paste): [CLAIMS]
– Corroboration I have (docs/other sources): [CORROBORATION]
– What I can disclose (role, industry, region): [SAFE_DESCRIPTORS]
Task: Build an anonymity protocol with:
1) A decision framework: when to grant anonymity vs when to refuse (and why). Include a “minimum corroboration bar.”
2) A terms‑of‑use script: how I explain attribution categories and recording consent in plain English.
3) A corroboration plan: at least 6 ways to verify the claims without exposing the source (documents, second source, public records, logs, expert validation).
4) A descriptor builder: 12 safe ways to describe the anonymous source without identifying them (avoid unique job titles, exact dates, tiny teams).
5) A quote handling policy: when to quote directly vs paraphrase, how to avoid “anonymous says” feeling flimsy, and how to attribute responsibly.
6) A short editor note template explaining why anonymity was granted and what corroboration was done (without revealing details).
Output: Decision framework + scripts + corroboration plan + safe descriptors + quote policy + editor note.
133. Product Demo Verification Plan (No ‘Magic Demos’)
Verify product claims without being tricked by demos.
Category: Fact-Checking
Inputs:
– Product/tech: [PRODUCT_OR_TECH]
– Claims I might repeat: [CLAIMS]
– Demo offer details: [DEMO_DETAILS]
– Independent benchmarks I can cite (if any): [BENCHMARKS]
– Constraints (time/tools): [CONSTRAINTS]
Task: Create a “demo verification protocol” with:
1) Pre‑demo checklist: what I need upfront (version, settings, dataset, success criteria, time limits).
2) “Trap questions” (polite): what would break it, failure cases, edge cases, and where it performs poorly.
3) Test plan: 8 tests I can request or simulate, including one adversarial test and one fairness/bias test (if relevant). Use placeholders for data inputs [TEST_INPUT] and expected outputs [EXPECTED_OUTPUT].
4) Evidence requests: logs, documentation, methodology, audit reports, reproducible steps, and what counts as acceptable proof.
5) Writing guidance: safe language if I cannot verify (how to attribute, how to describe limits, what not to claim). Provide 10 sentence templates.
6) Editor packet: what to document (screenshots, timestamps, settings) and how to store notes for fact‑checking.
Output: Pre‑demo checklist + trap questions + test plan + evidence requests + safe language templates + editor packet.
134. Jargon Control System (Glossary + Gatekeeper)
Keep precision while staying readable for beginners.
Category: Style
Inputs:
– Draft excerpt (paste 600–1,200 words): [DRAFT_EXCERPT]
– Terms/acronyms I must use: [MUST_USE_TERMS]
– Terms I’m unsure about: [CONFUSING_TERMS]
– Audience level: [AUDIENCE_LEVEL]
Task: Produce a jargon control pack:
1) Extract 20–40 terms from my excerpt and classify: essential / optional / remove.
2) Create a mini glossary: term → 1‑sentence definition → “common misunderstanding” → “how I’ll use it in this article.” Add citation placeholders where needed.
3) Rewrite my excerpt to reduce jargon density: define terms once, replace repeated acronyms, and convert abstract nouns into concrete verbs where possible. Mark any spot requiring verification with [VERIFY] or [ADD_SOURCE].
4) Give me “gatekeeper rules” I can run on the whole draft: max acronyms per section, first‑use definition, and a rule for replacing buzzwords (“leveraging,” “disruption,” etc.).
5) Provide 12 “precision sentences” that let me hedge correctly (what is known, what is debated, what is uncertain).
Output: Term list + glossary + rewritten excerpt + gatekeeper rules + precision sentences.
135. Hook Menu (10 Openers, No Clickbait)
Generate openings that earn attention and match the reporting.
Category: Writing Craft
Inputs:
– Angle: [ANGLE]
– Strongest verified detail/scene/quote: [STRONG_DETAIL]
– The central tension/tradeoff: [TENSION]
– Who it affects: [STAKEHOLDERS]
– What is uncertain/debated: [UNCERTAINTY]
Task: Produce a “hook menu” of 10 opening options, each 80–120 words, using different hook types:
1) Micro‑scene hook (truthful texture).
2) Contradiction hook (“It works—until it doesn’t…”).
3) Number hook (with caveat line).
4) Question hook (only if answerable by the piece).
5) “Hidden system” hook (incentives, supply chain, policy).
6) Human impact hook (without exploitation).
7) “Tool/feature you didn’t notice” hook (interface/behavior).
8) Timeline hook (quiet shift over time).
9) Skeptic hook (best criticism upfront).
10) “What broke” hook (failure mode).
After the 10 hooks, write 3 nut graf options (60–90 words each) that connect hook → thesis → reader payoff. Then give a “hook truth test” checklist (10 items) to ensure the opener is supported by later evidence.
Output: 10 hooks + 3 nut grafs + hook truth test checklist.
136. ‘So What?’ Pass (Section Value Check)
Make every section earn its place.
Category: Structure
Inputs:
– Reader promise (one sentence): [READER_PROMISE]
– Outline/headings (paste): [OUTLINE]
– Key evidence assets: [EVIDENCE_ASSETS]
– Word count target: [WORD_COUNT]
Task: Run a “So what?” audit and return:
1) A section-by-section table: section → job → “so what?” answer → proof needed → keep/cut/merge → suggested word budget.
2) Identify redundancies and propose a tighter structure (7–10 sections).
3) For sections that stay, write a 1‑sentence “value line” I can put at the top of the section to keep focus.
4) For sections that are weak, propose replacements that better serve the reader promise (e.g., add skeptic lane, add mechanism explainer, add human impact with consent, add data caveat).
5) Provide transition beats: how each section hands off to the next (momentum without filler).
6) End with a “reader payoff checklist”: 12 questions a reader should be able to answer after reading. Mark which section delivers each payoff.
Output: So‑what table + revised structure + value lines + replacements + transitions + payoff checklist.
137. Conflict‑of‑Interest Self‑Audit + Disclosure Lines
Protect credibility before an editor asks.
Category: Ethics
Inputs:
– Anything I might disclose (list): [POTENTIAL_COI]
– How I accessed sources/products: [ACCESS_METHOD]
– Any compensation/freebies: [COMPENSATION]
– Affiliate intent (yes/no/unsure): [AFFILIATE]
– Publication norms (if known): [POLICY_NOTES]
Task: Create a COI audit + disclosure pack:
1) COI risk assessment: categorize each item as high/medium/low risk and explain why it matters (perception + actual bias).
2) Mitigation plan: what steps restore trust (extra sourcing, third‑party verification, editor note, avoid certain claims, avoid affiliate placement).
3) Write 10 disclosure sentences for common scenarios (previous client, free product sample, travel paid, affiliate links, personal connection). Mark the safest 3.
4) Provide a “what to tell the editor” checklist: what to disclose privately even if not public, and how to phrase it calmly.
5) Provide a reader‑facing disclosure block template (40–80 words) with placeholders, plus a shorter 1‑sentence version for a footer.
Output: COI risk table + mitigation plan + disclosure lines + editor checklist + reader disclosure blocks.
138. Follow‑Up Strategy (Value‑Add, Not Nagging)
Follow up with something useful (not just ‘checking in’).
Category: Career
Inputs:
– Outlet: [PUBLICATION_NAME]
– Pitch summary: [PITCH_SUMMARY]
– Date sent: [DATE_SENT]
– New developments (if any): [NEW_PEG]
– Extra proof I can add (docs/data/quotes): [EXTRA_PROOF]
– My constraints: [CONSTRAINTS]
Task: Create a follow-up kit:
1) A follow-up schedule (Day 4, Day 10, Day 18) with stop conditions and respectful language.
2) Three value‑add follow-up emails (each 90–130 words), each using a different add‑on: (A) new proof, (B) tighter angle/outline, (C) alternate packaging (newsletter/service version). Include 6 subject line options total.
3) A “micro‑update menu”: 10 small improvements I can add in 10 minutes to justify a follow‑up (one new source, one statistic, one scene detail, one counterargument).
4) A polite “no response” closeout email that preserves goodwill and invites future ideas.
5) A short tracking table schema: outlet, pitch, follow‑ups, next action, status, lesson learned.
Output: Schedule + 3 value‑add emails + subject lines + micro‑update menu + closeout email + tracking schema.
139. Editor Q&A Pre‑Buttal Sheet (Answer Objections)
Anticipate what editors ask before they reply.
Category: Pitch Writing
Inputs:
– Pitch draft (paste): [PITCH_DRAFT]
– My access (truthful): [ACCESS]
– Proof I have: [PROOF_HAVE]
– Proof I can get: [PROOF_CAN_GET]
– Target length: [WORD_COUNT]
Task: Create a Q&A pre‑buttal pack:
1) List the top 15 editor questions for this pitch, grouped by: fit, novelty, feasibility, proof, risk, and reader payoff.
2) Write strong answers for each question using only my inputs; where missing, insert [MISSING_INFO] placeholders with a specific action (“get audit report,” “interview regulator”).
3) Identify the 5 most dangerous unanswered questions and tell me how to fix them before sending.
4) Rewrite my pitch into a version that quietly answers those questions (200–240 words) + a mini-outline (6 bullets with evidence placeholders).
5) Provide a final “send confidence score” with 3 improvements that would raise it fastest.
Output: Q list + answers + danger gaps + rewritten pitch + confidence score + improvement plan.
140. Revision Ladder (Macro → Micro)
A step-by-step revision system editors recognize.
Category: Editing
Inputs:
– Draft (paste): [DRAFT]
– Reader promise: [READER_PROMISE]
– Source list: [SOURCES]
– Deadlines: [DEADLINE]
Task: Produce a revision ladder with deliverables at each rung:
1) Macro structure: identify the spine, reorder sections if needed, and write a revised outline with section jobs + word budget.
2) Logic and flow: mark places where causality is implied; propose transitions and clarify timelines.
3) Evidence pass: list every paragraph’s proof need (quote/data/doc) and where it’s missing; create a mini claim table (15–25 rows).
4) Clarity pass: simplify jargon, tighten long sentences, and ensure each section answers “so what?”
5) Voice pass: improve rhythm and remove filler while keeping accuracy and restraint.
6) Final polish checklist: names, numbers, dates, attributions, links, and uncertainty language.
Also provide a 2‑day revision schedule: what I do in 60‑minute blocks to finish on time.
Output: Revised outline + flow notes + evidence gaps + mini claim table + style edits + polish checklist + 2‑day schedule.
141. Sensitive Topic Tone Calibration (Respect + Clarity)
Write about harm, safety, or identity without exploiting anyone.
Category: Style
Inputs:
– Draft excerpt or outline: [DRAFT_OR_OUTLINE]
– Who is affected: [AFFECTED_GROUPS]
– Sensitive elements present: [SENSITIVE_ELEMENTS]
– Consent/recording notes (if any): [CONSENT_NOTES]
– Safety risks (doxxing, retaliation): [SAFETY_RISKS]
Task: Provide a tone calibration pack:
1) Flag language risks: sensational words, victim framing, unnecessary graphic detail, moralizing, identity labels used carelessly, and privacy leaks.
2) Rewrite risky passages into safer versions that keep meaning and specificity. Use placeholders [REDACT] / [SAFE_DESCRIPTOR] where necessary.
3) Provide a “harm‑aware structure” plan: where to place content warnings (if needed), how to foreground agency, and how to keep the focus on systems and evidence rather than voyeurism.
4) Create a checklist for interview usage: consent clarity, anonymity options, quote approval policy (what you can and can’t promise), and when to paraphrase instead of quote.
5) Provide 12 safe phrasing templates for describing uncertainty, responsibility, and allegations without declaring guilt.
Output: Risk flags + rewritten safer passages + structure guidance + interview checklist + safe phrasing templates.
142. Citation Hygiene + Link Pack Standard
Make your sources easy for editors to verify.
Category: Fact-Checking
Inputs:
– Source list (paste URLs + notes): [SOURCE_LIST]
– Draft excerpt (paste): [DRAFT_EXCERPT]
– Required citation style (if known): [STYLE]
– Any paywalled sources: [PAYWALLS]
Task: Create a citation hygiene pack:
1) Convert my source list into a standardized link pack: title, publisher, date, URL, what it supports, credibility note (primary/secondary), and “best quote/figure” location if applicable.
2) Map sources to draft: for each paragraph, list which source supports it; flag paragraphs with no source and add [NEED_SOURCE] actions.
3) Detect weak sourcing patterns: press releases only, single-source claims, outdated sources, circular citations. Recommend stronger alternatives by source type (audit, filing, dataset, peer review).
4) Provide a “citation sentence template” library: how to attribute statistics, timelines, and study findings in clear prose without clutter.
5) Build a 10‑minute pre‑submit link check routine: archive steps, screenshot critical pages, note retrieval date, and store PDFs/notes.
Output: Clean link pack + paragraph mapping + weak-source flags + attribution templates + link-check routine.
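Parts of the link pack and the pre‑submit check above can be scripted. A minimal Python sketch, assuming a simple row format; the field names (`supports`, `credibility`) and the `[NEED_SOURCE]`/`[VERIFY]` flag wording are illustrative choices, not a standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LinkPackEntry:
    title: str
    publisher: str
    url: str
    supports: str        # which paragraph/claim this source backs
    credibility: str     # "primary" or "secondary"
    retrieved_on: str    # ISO date, stamped when the link is checked

def build_entry(title, publisher, url, supports, credibility="secondary"):
    """Standardize one source into a link-pack row, stamping today's date."""
    return LinkPackEntry(title, publisher, url, supports, credibility,
                         retrieved_on=date.today().isoformat())

def flag_gaps(entries):
    """Return flags for rows an editor could not verify as-is."""
    flags = []
    for e in entries:
        if not e.url.startswith(("http://", "https://")):
            flags.append(f"[NEED_SOURCE] bad or missing URL for: {e.title!r}")
        if e.credibility not in ("primary", "secondary"):
            flags.append(f"[VERIFY] unlabeled credibility for: {e.title!r}")
    return flags
```

Running `flag_gaps` over the whole pack before submitting gives a quick list of rows that still need work; archiving each URL (for example via the Wayback Machine) remains a manual step.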
143. Audience Intent Map (Feature Fit + Reader Promise)
Align your story with what readers actually came for.
Category: Publication Fit
Inputs:
– Topic: [TOPIC]
– Outlet vibe: [OUTLET_VIBE]
– What the reader already knows: [BASELINE_KNOWLEDGE]
– What I can report/verify: [REPORTING_ASSETS]
– Primary angle: [ANGLE]
Task: Build an intent map with:
1) 3 audience personas (curious beginner, informed practitioner, skeptical critic) with what they fear, want, and misunderstand.
2) A “reader question ladder”: 12 questions in the order readers will ask them (what is it → how it works → who wins/loses → what changes → what to watch).
3) A myth map: 8 common misconceptions and the evidence needed to correct each (placeholders allowed).
4) A recommended format: Feature / Service / Business / Culture / Science, with reasons.
5) A draft outline that follows the question ladder, including where to place: scene, explainer, skeptics, and data caveats.
6) A 1‑sentence reader promise and a 2‑sentence “why now” that avoids breaking‑news dependency.
Output: Personas + question ladder + myth map + format recommendation + outline + reader promise + why now.
144. Service Article Testing Protocol (Hands‑On Verification)
Prove your how‑to advice actually works.
Category: Service/How-To
Inputs:
– Proposed steps (paste): [STEPS]
– Devices/tools/software versions: [ENVIRONMENT]
– Reader constraints (budget/privacy/time): [CONSTRAINTS]
– Safety/ethical concerns: [SAFETY]
– Common failures I’ve seen: [FAILURES]
Task: Create a service testing pack:
1) A test matrix: platform/device/version, prerequisites, and what success looks like.
2) A step verification checklist: for each step, what I must confirm (screenshots, settings path, expected result). Use placeholders like [INSERT_SCREENSHOT] / [EXPECTED_RESULT].
3) Edge cases: at least 10 likely failure cases and how the guide should handle them (warnings, alternatives, “stop here”).
4) A transparency block: what I tested personally vs what is based on docs, and how to disclose limitations.
5) Rewrite my steps into a WIRED‑style format: short headings, numbered steps, “why this matters” lines, and safety notes.
6) A final “reader success” checklist: 8 items readers should have by the end (settings, outcomes, sanity checks).
Output: Test matrix + verification checklist + edge cases + transparency block + rewritten steps + reader success checklist.
145. SEO‑Safe Packaging (Deck, Meta, Social) Without Selling Out
Make your story discoverable while staying magazine‑quality.
Category: Headlines
Inputs:
– Reader promise: [READER_PROMISE]
– Angle: [ANGLE]
– Keywords (primary + secondary): [KEYWORDS]
– Risky words to avoid: [BANNED_WORDS]
– Audience: [AUDIENCE]
Task: Build a packaging kit:
1) 12 headline options in three buckets (clarity-first, curiosity-first, contradiction). Mark the best 3 for editor safety.
2) 8 deck/subhead options (1–2 sentences) that add stakes and specificity without hype.
3) 6 meta descriptions (140–160 chars) that incorporate keywords naturally and remain truthful.
4) Social copy pack: 6 tweets/posts, 3 LinkedIn blurbs, 3 newsletter blurbs. Include a “what you’ll learn” style and a “tradeoff” style.
5) A “truth test”: for the top headline, list the exact sections the article must contain. If missing, add [ADD_SECTION] notes.
6) A mini SEO sanity checklist: search intent match, keyword stuffing avoidance, and how to keep voice magazine‑friendly.
Output: Headlines + decks + meta descriptions + social pack + truth test + SEO sanity checklist.
146. Art Desk Brief (Illustration/Photo Plan)
Give editors a visual plan that supports the story’s meaning.
Category: Data & Visuals
Inputs:
– Article angle: [ANGLE]
– Key sections/beats: [BEATS]
– Visual assets I can provide (screenshots, data, locations): [ASSETS]
– Consent constraints (faces, private info): [CONSENT_CONSTRAINTS]
– Style preference (photo/illustration/diagram): [STYLE_PREF]
Task: Create an art brief with:
1) 10 visual concepts ranked by impact vs effort. For each: what it explains, where it goes in the story, and required assets.
2) Pick the best 3 concepts and write full briefs: objective, composition, labels/annotations, accessibility notes, and verification notes (what must be accurate).
3) Provide an “asset request list” for me: what screenshots/data/permissions to gather, with file naming rules and privacy redaction guidance.
4) Write caption templates (short + medium) and alt‑text templates for each chosen visual, using placeholders like [INSERT_LABEL] and [INSERT_NUMBER].
5) Provide a note on legal/ethical boundaries: when not to show logos, faces, private dashboards, or minors, and how to blur/redact safely.
Output: 10 ranked visuals + 3 full art briefs + asset request list + captions/alt‑text + ethics boundaries.
147. Post‑Publish Impact Tracker + Follow‑Up Pitch Generator
Turn one published piece into your next 3 pitches.
Category: Career
Inputs:
– Clip link/title: [CLIP]
– Key insight of the piece: [KEY_INSIGHT]
– Sources/interviews I gathered: [REPORTING_ASSETS]
– Audience reactions (if any): [REACTIONS]
– Time I can spend per week: [TIME_PER_WEEK]
Task: Create a post‑publish pack:
1) Impact tracker table: metrics that matter (editor feedback, newsletter pickup, inbound leads, credible shares), plus a note on what metrics to ignore.
2) A 7‑day lightweight promotion plan: who to notify (sources, communities, experts), what to post, and what not to do. Include privacy and consent reminders.
3) A “follow‑up pitch generator”: produce 6 follow-up angles based on my reporting assets (systems angle, policy angle, service angle, human impact angle). For each: what’s new, what proof I already have, what new proof I need, and best outlet category fit.
4) Write 3 complete pitch emails (190–230 words) for the best 3 follow-ups, using placeholders [INSERT_PROOF] and [INSERT_NEW_SOURCE].
5) A relationship plan: how to thank the editor, ask for feedback, and propose a second idea without being pushy.
Output: Impact tracker + 7‑day plan + 6 follow-up angles + 3 pitch emails + relationship plan.
148. Newsletter/Trend Mining → Pitchable Angles
Turn trend noise into 6 reportable story angles.
Category: Research
Inputs:
– Trend bullets (paste): [TREND_BULLETS]
– Target publication: [PUBLICATION_NAME]
– Geographic scope: [SCOPE]
– My access (people/products/docs): [ACCESS]
– Word count range: [WORD_COUNT]
Task: Produce a trend‑to‑pitch kit:
1) Cluster the bullets into 4–6 “themes” and name each theme in plain English (no buzzwords).
2) For the best 6 angles, write: (a) a one‑sentence hook, (b) the core question, (c) what’s new/underrated about it, (d) a 6‑bullet outline with an evidence placeholder per bullet, and (e) the “why now” peg that does not rely on breaking news.
3) For each angle, list the minimum reporting required: 3 interview target types + 3 document/data targets + 1 skeptic source type, with notes on what each can confirm.
4) Flag “content farm traps” (angles that are too generic) and propose a reporting twist to make them magazine‑worthy.
5) Choose the strongest 2 angles for [PUBLICATION_NAME] and write 2 pitch emails (190–230 words) with clean subject lines and honest access statements.
Output: Themes + 6 angles + reporting plans + trap fixes + 2 finished pitches.
149. Reverse Outline → Fix a Messy Draft Fast
Diagnose structure by outlining what you actually wrote.
Category: Editing
Inputs:
– Draft text (paste): [DRAFT]
– Reader promise (one sentence): [READER_PROMISE]
– Word count goal: [WORD_COUNT]
– Must‑keep sections (if any): [MUST_KEEP]
Task: Provide a reverse‑outline repair kit:
1) Reverse outline table: paragraph/section → what it says → what job it serves (scene/explainer/proof/stakes/skeptic/how‑to) → keep/move/cut/merge.
2) Identify the spine: the single through‑line your draft is trying to prove. If it’s missing, propose 2 alternative spines that match my inputs.
3) Redundancy map: list repeated points and propose where each should live once, with a better supporting proof unit.
4) Rebuild outline: a cleaner 7–10 section outline with section jobs, word budget, and one evidence placeholder per section ([INSERT_QUOTE]/[INSERT_DATA]/[INSERT_DOC]).
5) Transition lines: 1–2 sentence bridges between sections that maintain momentum without fluff.
6) “So what” test: for each rebuilt section, write a one‑sentence payoff that aligns with [READER_PROMISE].
Output: Reverse outline + spine + redundancy fixes + rebuilt outline + transitions + payoff lines.
150. Quote Discipline (No Quote Soup)
Use quotes as evidence and character—not decoration.
Category: Writing Craft
Inputs:
– Quotes/notes (paste): [QUOTES]
– Draft section using them (paste): [SECTION_DRAFT]
– Attribution constraints (on record/background): [ATTRIBUTION_RULES]
– What each interview was meant to prove: [PURPOSE]
Task: Create a quote‑use system:
1) Classify each quote by job: evidence, explanation, emotion, conflict, confession/limitation, vivid detail, authority, counterargument.
2) Select the best [N] quotes and justify why each earns space. Identify quotes to paraphrase instead and explain how to paraphrase safely.
3) Rewrite my section with improved quote handling: set up → quote → unpack → connect to claim. Keep the meaning and add [ADD_SOURCE] markers where a quote implies a factual claim needing verification.
4) Provide 12 “quote verbs” and framing patterns that avoid over‑reliance on “said” while staying neutral (avoid “admitted” unless accurate).
5) Add a “quote ethics” checklist: consent, anonymity, context, avoiding doxxing, and not using vulnerable sources as props.
6) End with a micro rubric: does every quote move the story forward? If not, cut or paraphrase.
Output: Quote job map + keep/cut list + rewritten section + framing patterns + ethics checklist + rubric.
151. Attribution Upgrade (Who Says What, Clearly)
Make claims traceable and fair with better attribution.
Category: Fact-Checking
Inputs:
– Fuzzy lines (paste): [FUZZY_LINES]
– Source list (paste): [SOURCES]
– Entities involved: [ENTITIES]
– Risk level: [RISK_LEVEL]
Task: Produce an attribution upgrade pack:
1) For each line, label it as: verified fact / claim by party / inference / speculation / value judgment. Explain why the label fits.
2) Rewrite each line into 2 versions: (A) evidence‑forward, (B) attribution‑forward. Use placeholders like [CITE_DOC], [CITE_DATA], [CITE_INTERVIEW] where needed.
3) Provide a “how they know” clause library: 12 short add‑ons (“based on filings,” “according to internal emails reviewed by…,” “from usage logs shared with…”) that are accurate when true.
4) List the top 8 attribution mistakes beginners make and how to avoid them (anonymous “experts,” vague “studies show,” “critics say” without naming the critic type).
5) Add a fairness note: where to include responses from the subject of criticism and how to summarize them neutrally.
Output: Labels + 2 rewrites per line + clause library + common mistakes + fairness note.
152. Public Records + FOIA Plan (Beginner‑Friendly)
Find primary documents legally and ethically.
Category: Research
Inputs:
– Jurisdiction(s): [JURISDICTION]
– Entities involved: [ENTITIES]
– What I’m trying to verify: [VERIFICATION_GOALS]
– Deadline constraints: [DEADLINE]
– Sensitivity (personal data, minors): [SENSITIVITY]
Task: Create a public records kit:
1) A “record map”: 10 record types likely relevant (filings, permits, audits, enforcement actions, procurement, court docs, regulator guidance, inspection reports, ownership records, datasets). For each: where to find it and what it can prove.
2) A search plan: 20 search queries and site targets (agencies/courts/portals) customized to [JURISDICTION]. Use placeholders for entity names and case numbers.
3) A FOIA/RTI request template (short, precise): what to ask for, date ranges, how to narrow, fee language, and contact details placeholders.
4) A timeline plan: what records can be found today vs what takes weeks, and how to write the story if records arrive late (safe language + update plan).
5) A privacy and ethics checklist: redactions, avoiding doxxing, and how to cite records responsibly.
Output: Record map + query pack + request template + timeline plan + privacy checklist.
153. Translation/Localization Accuracy Protocol
Use translated materials without introducing errors.
Category: Fact-Checking
Inputs:
– Original language + excerpt (paste): [ORIGINAL_EXCERPT]
– My translation (paste): [MY_TRANSLATION]
– Technical domain: [DOMAIN]
– Publication language: [OUTPUT_LANGUAGE]
– Names/terms that must stay exact: [MUST_KEEP_TERMS]
Task: Build a translation accuracy pack:
1) Identify high‑risk terms and ambiguous phrases, and propose a glossary table: original term → literal translation → best journalistic translation → notes.
2) Create a verification workflow: when to use machine translation, when to seek native speaker review, and how to cross‑check with secondary sources.
3) Provide safe attribution templates: “translated from [LANG] by the author,” “according to a translation reviewed by [REVIEWER_ROLE],” and how to note uncertainty.
4) Rewrite my translation into a clearer, more accurate version (if possible) while preserving meaning; mark uncertain phrases with [UNCERTAIN].
5) Give a “cultural context” checklist: what readers may misread, how to explain local terms, and how to avoid stereotypes.
Output: Term glossary + verification workflow + attribution templates + revised translation + context checklist.
154. Startup Claims Due Diligence (Revenue, Users, ‘AI’)
Verify hype claims with realistic evidence.
Category: Fact-Checking
Inputs:
– Startup claims (paste): [CLAIMS]
– Proof provided (decks, screenshots, docs): [PROOF]
– Public info available (site, filings): [PUBLIC_INFO]
– Competitors/comparisons: [COMPARISONS]
– Right‑of‑reply status: [COMMENT_REQUEST]
Task: Produce a claims verification kit:
1) Break claims into types: user count, revenue, retention, performance benchmarks, AI capability, customer outcomes, security/privacy, market leadership.
2) For each claim, list acceptable evidence tiers: strongest (audits/filings/contracts), medium (customer references + metrics logs), weak (press release). Explain what each tier can and cannot prove.
3) Create a question list for founders/PR (15 questions) and a separate list for customers (10 questions) focused on measurable outcomes, not vibes.
4) Provide safe‑language rewrites for unverified claims (10 sentence templates), including how to attribute and how to avoid “best in class” fluff.
5) Create a “red flags” checklist: vanity metrics, cherry‑picked demos, unclear denominators, unverifiable benchmarks, and “AI washing.”
Output: Claim breakdown + evidence tiers + question lists + safe rewrites + red flags checklist.
155. Study Quality Evaluator (Science Without Hype)
Assess studies and write honest conclusions.
Category: Research
Inputs:
– Study title/abstract (paste): [STUDY_TEXT]
– My intended claim: [INTENDED_CLAIM]
– Domain: [DOMAIN]
– Reader level: [AUDIENCE_LEVEL]
Task: Create a study evaluation pack:
1) Summarize what the study actually tested, in 3 bullets, and what it did not test (limits).
2) Evaluate quality signals: sample size, controls, confounders, measurement validity, preregistration (if applicable), replication status, funding/conflicts, peer‑review status.
3) Provide 3 possible write‑ups: (A) cautious 1‑sentence, (B) 3‑sentence with context, (C) 120–160 word explainer with caveat block.
4) List 8 questions to ask an independent expert about this study (what headlines get wrong, generalizability, effect size vs significance).
5) Provide safe attribution and uncertainty language templates to avoid certainty leaps.
6) Tell me whether my intended claim is supported, partially supported, or not supported, and give a defensible replacement claim if needed.
Output: What it tested + quality eval + 3 write‑ups + expert questions + safe templates + defensible claim.
156. Narrative Beat Sheet (Feature Momentum)
Plan beats so the feature keeps moving.
Category: Structure
Inputs:
– Angle/thesis: [ANGLE]
– Verified scene material (if any): [SCENE_MATERIAL]
– Interviews/assets: [ASSETS]
– Skeptic/counterpoints: [SKEPTICS]
– Word count target: [WORD_COUNT]
Task: Build a beat sheet with 10–14 beats:
1) Opening: scene or “moment of friction” + bridge to nut graf.
2) Setup: what the reader thinks they know + what’s missing.
3) Reveal #1: a verified detail that changes understanding.
4) Mechanism: explain how it works (with one concrete example).
5) Stakes: who wins/loses and why it matters now.
6) Skeptic beat: best critique and what evidence tests it.
7) Reveal #2: a tradeoff or unintended consequence.
8) Systems beat: incentives/policy/supply chain behind it.
9) Human beat: impact with consent and privacy safeguards.
10) What’s next: indicators to watch (no predictions).
11) Ending: implication + limit/uncertainty acknowledgement.
For each beat: write a 1–2 sentence “beat paragraph,” list required evidence, and place it in an outline with word budget.
Output: 10–14 beat sheet + evidence requirements + outline with word budget.
157. Headline Rationale + Editor Options Pack
Deliver headline options with reasons (editors love this).
Category: Headlines
Inputs:
– Reader promise: [READER_PROMISE]
– Angle: [ANGLE]
– Top verified detail(s): [VERIFIED_DETAILS]
– Keywords to include (optional): [KEYWORDS]
– Words to avoid: [AVOID_WORDS]
Task: Create an editor‑friendly options pack:
1) 18 headlines in 3 buckets (clarity, curiosity, contradiction). Mark the top 5 safest picks.
2) For each top 5, give a rationale: what it promises, who it’s for, why it’s not clickbait, and what section of the article fulfills the promise.
3) Write 8 decks/subheads (1–2 sentences) that add specificity and stakes without hype.
4) Provide 10 alternative opening lines that pair well with the best headline (so the page feels cohesive).
5) Add a “truth test”: list 8 headline failure modes (overpromising, vague, too long, too technical, too moralizing) and check the top 5 against them.
Output: 18 headlines + top 5 rationale + decks + opening lines + truth test results.
158. AI Research Guardrails (Search, Summaries, Hallucinations)
Use AI to assist research without importing errors.
Category: Ethics
Inputs:
– What AI tools I’ll use (optional): [TOOLS]
– My topic and claims: [TOPIC_AND_CLAIMS]
– Risk level (health/legal/allegations): [RISK_LEVEL]
– Sources I already have: [SOURCES_HAVE]
Task: Create an AI‑safe research workflow:
1) Allowed vs forbidden uses, with examples (brainstorm vs fabricate). Include a “no citation without URL/title/date” rule.
2) A “source‑first” process: start with primary docs, then reputable secondary reporting, then expert interviews. Show how AI supports each step without replacing it.
3) A note‑taking template: claim → evidence link → quote/snippet → confidence → next verification action.
4) A hallucination detection checklist: red flags in AI output and what to do when you see them (re‑prompt with sources, remove claim, verify separately).
5) Three prompts I can reuse: (A) generate search queries, (B) summarize a source I paste, (C) turn notes into an outline with evidence placeholders.
6) A disclosure guidance block: what I should tell the editor and optional reader disclosure lines if required.
Output: Allowed/forbidden list + source‑first workflow + notes template + detection checklist + 3 reusable prompts + disclosure guidance.
159. Accessibility Pass for Visuals + UI Descriptions
Make visuals and UI references usable for everyone.
Category: Data & Visuals
Inputs:
– Visual list (paste): [VISUALS]
– Draft captions (paste): [CAPTIONS]
– Key takeaway per visual: [TAKEAWAYS]
– Sensitive data present (yes/no): [SENSITIVE_DATA]
Task: Produce an accessibility pack:
1) For each visual, write: (a) short alt text (1–2 sentences), (b) long description (3–6 sentences) focusing on the key takeaway, and (c) a caption that names what the reader should notice.
2) UI description templates: how to describe settings paths and interface elements in text (“Settings → Privacy → …”) without relying on “click the blue button.”
3) A redaction/privacy checklist for screenshots: blur names, IDs, faces, locations, private dashboards; keep evidence notes for editors.
4) A chart integrity checklist: axes, units, baselines, color‑blind friendly labels, and not hiding uncertainty.
5) A final “visuals QA” routine before submission (10 minutes).
Output: Alt text + long descriptions + improved captions + UI templates + privacy checklist + QA routine.
160. Corrections & Update Policy (After Publish)
Handle errors like a professional, not a panic spiral.
Category: Career
Inputs:
– What was flagged (paste): [FLAGGED_ISSUE]
– Evidence I have: [EVIDENCE]
– Who flagged it (reader/source/editor): [WHO]
– Publication correction norms (if known): [POLICY_NOTES]
Task: Create a corrections toolkit:
1) Decision tree: typo vs factual error vs interpretation dispute vs allegation; what action each requires.
2) Verification steps: how to re‑check sources, contact relevant parties, and document findings.
3) Draft messages: (A) email to editor, (B) reply to source/reader, (C) short social response if needed. Keep tone neutral and concise.
4) Correction note templates: transparent but minimal; explain what changed, not a long essay.
5) A “prevent next time” checklist: where the process failed (numbers, names, dates, attribution) and what to add to my workflow.
Output: Decision tree + verification steps + draft messages + correction templates + prevention checklist.
161. Editor Collaboration Plan (Clean Handoffs)
Make it easy for editors to say yes and edit fast.
Category: Career
Inputs:
– Editor expectations (if known): [EXPECTATIONS]
– My constraints: [CONSTRAINTS]
– Story type: [STORY_TYPE] (feature/service/culture/business)
– Assets I’ll provide: [ASSETS] (link pack, claim table, images)
Task: Create an editor‑friendly workflow:
1) Stage deliverables checklist: what I send at pitch, at first draft, at revision, at final (include word count, outline, link pack, claim table, interview log, visuals list).
2) A placeholder system: how I mark unknowns ([ADD_SOURCE], [VERIFY], [NEED_QUOTE]) so the editor can scan quickly.
3) A revision protocol: how to respond to edits, track changes, confirm factual edits, and ask smart questions without defensiveness.
4) A “risk note” template: how to flag sensitive claims and what corroboration exists.
5) A one‑page cover note template for every draft: what changed since last version, where the proof is, and what I need from the editor.
Output: Stage checklists + placeholder system + revision protocol + risk note template + cover note template.
162. Rate & Rights Negotiation Email Builder
Negotiate politely: rate, kill fee, rights, and payment timing.
Category: Career
Inputs:
– Offer details (paste): [OFFER]
– My target rate: [TARGET_RATE]
– Minimum acceptable: [MIN_RATE]
– Rights mentioned: [RIGHTS_MENTIONED]
– Payment terms mentioned: [PAYMENT_TERMS]
Task: Create a negotiation pack:
1) A checklist of the 12 key terms to clarify (rate, kill fee, rights scope, exclusivity window, edit process, fact‑check expectations, payment schedule, invoice method, expenses, byline, syndication).
2) Two email drafts: (A) gentle ask for improvement, (B) firm but friendly negotiation. Each should be 140–190 words and include a clear next step.
3) A “trade” menu: 8 alternatives if rate can’t move (shorter piece, faster turnaround fee, rights limitation, higher kill fee, expense coverage, guaranteed future assignment).
4) A one‑sentence closer that protects the relationship no matter what they decide.
Output: Terms checklist + 2 negotiation emails + trade menu + relationship‑safe closer.
163. Re‑Pitch Protocol (Same Idea, New Angle)
Recycle ethically: new outlet, new proof, new framing.
Category: Pitch Writing
Inputs:
– Original pitch (paste): [ORIGINAL_PITCH]
– Feedback (if any): [FEEDBACK]
– New target outlet: [NEW_OUTLET]
– New reporting assets since then: [NEW_ASSETS]
– My timeline: [TIMELINE]
Task: Create a re‑pitch kit:
1) Diagnose why the first pitch failed (fit, timing, novelty, proof, scope). Provide 3 fixes for each probable reason.
2) Generate 5 new angles from the same core idea, each with a new reader promise and a new “why now.” For each, include a mini‑outline with evidence placeholders.
3) Recommend the best angle for [NEW_OUTLET] and explain fit (section type, audience intent).
4) Write a fresh pitch email (190–230 words) that does not mention the prior rejection unless necessary. Include a clean subject line and honest access statement.
5) Provide a re‑pitch etiquette checklist: timing, exclusivity, when to disclose prior pitching, and how to avoid simultaneous‑submission conflicts.
Output: Failure diagnosis + 5 new angles + best‑fit rationale + new pitch email + etiquette checklist.
164. Community Sourcing Ethics (Calls for Stories)
Run an ethical call for sources without exploiting people.
Category: Ethics
Inputs:
– Where I want to post: [COMMUNITIES]
– Sensitivity level: [SENSITIVITY]
– What I’m trying to learn: [QUESTIONS]
– What I can offer (anonymity, time): [OFFER]
– Deadline: [DEADLINE]
Task: Create a community sourcing kit:
1) A call‑for‑stories post (150–220 words) with transparent intent, consent expectations, and contact methods. Include a privacy warning and an anonymity option.
2) A short DM/script for respondents with consent language and attribution choices (on‑record/background/anonymous).
3) A screening questionnaire (8–12 questions) that verifies relevance and avoids collecting sensitive personal data unnecessarily.
4) A safety protocol: how to handle minors, illegal activity, medical claims, harassment, and retaliation risk.
5) A verification plan: how to corroborate stories responsibly (documents, logs, second sources) and how to write when verification is limited.
Output: Call post + DM script + screening form + safety protocol + verification plan.
165. Privacy‑Safe Data Handling (Screenshots, Logs, DMs)
Protect people while keeping evidence for fact-checking.
Category: Ethics
Inputs:
– Artifact types I have: [ARTIFACT_TYPES]
– Sensitive data risks: [RISKS]
– Who could be harmed if exposed: [AFFECTED]
– Where I store files now: [CURRENT_STORAGE]
Task: Create a privacy‑safe evidence protocol:
1) A classification system: public / internal‑editor‑only / do‑not‑store. Give examples for each.
2) A file hygiene plan: naming rules, versioning, access control, backups, and deletion schedule. Include redaction workflow steps for screenshots and PDFs.
3) A consent checklist: what to get permission for (quotes, screenshots, names), and what you should never promise (quote approval) while staying ethical.
4) A publishing checklist: what must be blurred/redacted, how to anonymize people without leaving them identifiable through unique details, and how to cite artifacts responsibly.
5) A short “editor note” template describing what artifacts exist and how they were handled, without exposing private data.
Output: Classification system + hygiene plan + consent checklist + publishing checklist + editor note template.
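The naming rules in the file hygiene plan can be enforced with a tiny helper. A sketch in Python, assuming a `date_entity_artifact_vNN[_redacted]` pattern; the pattern itself is a hypothetical example of a naming rule, not a requirement:

```python
import re
from datetime import date

def _slug(text):
    """Lowercase and replace runs of non-alphanumerics with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def artifact_name(entity, artifact, version, redacted, when=None):
    """Build a predictable file name: date_entity_artifact_vNN[_redacted]."""
    when = when or date.today()
    name = f"{when.isoformat()}_{_slug(entity)}_{_slug(artifact)}_v{version:02d}"
    return name + ("_redacted" if redacted else "")
```

For example, `artifact_name("Acme Corp", "DM screenshot", 2, True)` yields a name like `2024-05-01_acme-corp_dm-screenshot_v02_redacted`, which sorts by date and makes the redaction status visible at a glance.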
166. Final Submission Gate (Editor‑Ready Checklist)
One last pass before you hit send.
Category: Editing
Inputs:
– Draft or outline (paste): [DRAFT]
– Source list/link pack (paste): [SOURCES]
– Visuals list (if any): [VISUALS]
– Sensitive claims (if any): [SENSITIVE]
Task: Run a final submission gate with these outputs:
1) “Must fix” list (top 10): the most important issues blocking publication (proof gaps, unclear thesis, missing right‑of‑reply, overclaiming).
2) “Nice to improve” list (10 items): clarity, pacing, style, scannability, subheads, transitions.
3) A compliance check: attribution, quotes, numbers, dates, names, and ethics (COI, privacy, anonymity rules).
4) A packaging check: headline/deck match the content; add 3 safer headline options if needed.
5) A submission bundle checklist: what files/notes to attach (cover note, link pack, claim table, interview log, captions/alt text). Provide a short cover note draft (90–140 words).
Output: Must‑fix + nice‑to‑improve + compliance check + packaging check + submission bundle checklist + cover note draft.
167. Clip Bundle Builder (3 Clips → One Strong Pitch)
Package your best work to look experienced fast.
Category: Career
Inputs:
– My clips (link + summary): [CLIPS]
– Target topic/beat: [TARGET_BEAT]
– Target outlet: [PUBLICATION_NAME]
– Skills I want to highlight: [SKILLS]
– Any sensitive constraints (NDA, anonymity): [CONSTRAINTS]
Task: Create a clip bundle pack:
1) Pick the best 3 clips/samples and explain why each supports this beat (reporting, clarity, data, ethics, voice).
2) Write a 90–120 word “portfolio proof” paragraph that summarizes my strengths with concrete verbs and no fluff. Include the three links as placeholders [CLIP_LINK_1], [CLIP_LINK_2], [CLIP_LINK_3].
3) Write 6 one‑line credibility bullets I can swap into pitches (e.g., “I reported using filings/interviews…,” “I built a link pack…”). Keep them honest and specific.
4) Provide a mini “credibility plan” for my next 30 days: what one sample I should create next to strengthen this beat (service guide, reported mini‑feature, data explainer).
Output: Best 3 picks + portfolio paragraph + credibility bullets + 30‑day plan.
168. Evergreen Refresh Plan (Update Without Rewriting)
Keep an older story accurate and valuable over time.
Category: Editing
Inputs:
– Article text or URL: [ARTICLE]
– Original publish date: [PUBLISH_DATE]
– Sections most likely to go stale: [STALE_SECTIONS]
– Key sources used originally: [ORIGINAL_SOURCES]
Task: Build an evergreen refresh kit:
1) Staleness audit: list the top 15 items to re‑verify (numbers, prices, policies, product names, links, claims).
2) Update workflow: how to find the latest primary sources, how to document changes, and how to avoid “silent rewrites.”
3) Safe update language: templates for “updated as of [DATE],” “the company now says…,” “the policy has since changed…” without overclaiming.
4) Link hygiene: how to replace dead links, add archived links, and keep a “retrieved on” record.
5) A short “update note” template for editors, and an optional reader note if required.
6) A light monthly/quarterly cadence: which parts to check each cycle so updates take [MINUTES] minutes, not hours.
Output: Staleness audit + update workflow + safe language templates + link hygiene plan + update notes + maintenance cadence.
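The link hygiene step above reduces to a small decision rule once you have each link's HTTP status. A minimal Python sketch; the action strings are illustrative placeholders, and fetching the live status (e.g. with `urllib.request`) is left out because it needs network access:

```python
def link_action(status, has_archive):
    """Map an HTTP status code to a refresh action for the link-hygiene pass."""
    if status == 200:
        return "keep (update 'retrieved on' date)"
    if status in (301, 302, 308):
        return "follow redirect and cite the final URL"
    if status in (404, 410):
        # Prefer an archived copy if one was saved at publish time.
        return "swap in archived copy" if has_archive else "[NEED_SOURCE] find replacement"
    return "recheck manually"
```

Looping this over the original source list gives a ready-made to-do list for the refresh, so the update pass stays in minutes rather than hours.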
169. Idea → Angle Funnel (Broad to Sharp)
Turn a vague topic into a publishable, reportable angle.
Category: Research
Inputs:
– Broad topic: [BROAD_TOPIC]
– What interests me most: [PERSONAL_CURIOSITY]
– What I can access (people/products/docs/data): [ACCESS]
– Audience: [AUDIENCE]
– Scope (region/time): [SCOPE]
– Risk/sensitivity: [RISK_LEVEL]
Task: Build an “angle funnel” in 5 steps:
1) Generate 12 specific sub‑angles (not categories) that are each testable or reportable (include one skeptic angle and one “hidden incentives” angle).
2) For each sub‑angle, write: core question, what’s new/under‑covered, and the minimum evidence required (1 doc + 1 interview type + 1 data/proof unit).
3) Choose the best 3 angles for [PUBLICATION_NAME] and explain the fit (section vibe, reader intent, novelty).
4) For the top angle, create a 7‑section outline with a word budget and evidence placeholders per section ([ADD_QUOTE]/[ADD_DATA]/[ADD_DOC]).
5) Write a one‑sentence reader promise and a “why now” that does not rely on breaking news. End with 8 reporting actions I should do next, ranked by impact.
Output: 12 sub‑angles + top 3 picks + final outline + reader promise + ranked reporting actions.
170. Coverage Gap Scan (What Others Missed)
Find a fresh hook by mapping existing coverage and its blind spots.
Category: Research
Inputs:
– Topic: [TOPIC]
– Existing coverage list (paste): [COVERAGE_LIST]
– Target outlet: [PUBLICATION_NAME]
– My access advantage: [ACCESS_ADVANTAGE]
– Desired story format: [FORMAT] (feature/service/business/culture)
Task: Create a gap scan pack:
1) Coverage map: cluster the existing pieces into 4–7 story frames (e.g., hype, fear, founder profile, product review, policy panic). Name each frame neutrally.
2) Gap list: identify at least 12 gaps across (a) stakeholders missing, (b) mechanism under‑explained, (c) data missing, (d) incentives not explored, (e) timeline confusion, (f) consequences ignored, (g) geographic bias, (h) claims repeated without proof.
3) For the top 5 gaps, propose a reportable angle: hook + core question + what proof I need + who to interview + one document/dataset target. Use placeholders.
4) Produce 2 “WIRED‑fit” pitches (190–230 words) based on the best 2 gaps, including subject lines and a mini‑outline with evidence placeholders.
5) Add a “don’t repeat” checklist: 10 clichés and overused claims to avoid for this topic, and how to replace them with specific reporting moves.
Output: Coverage map + 12 gaps + 5 gap‑angles + 2 completed pitches + don’t‑repeat checklist.
171. Interview Request + Consent Script (Beginner‑Safe)
Get sources ethically with clear attribution options.
Category: Ethics
Inputs:
– Who I’m contacting (role/type): [SOURCE_ROLE]
– How I found them: [HOW_FOUND]
– Sensitivity level: [SENSITIVITY]
– My deadline: [DEADLINE]
– Recording plan (yes/no/optional): [RECORDING]
Task: Build an interview outreach kit:
1) Write 3 outreach emails (90–130 words each): (A) expert, (B) affected person, (C) company/PR. Each should include: why I’m reaching out, what I’m working on, time ask, and consent/attribution clarity. Use placeholders like [TIME_WINDOW] and [TOPIC_QUESTION].
2) Write a short pre‑call consent script (spoken) that covers: on‑the‑record vs background vs anonymous, recording permission, and how quotes may be edited for clarity without changing meaning.
3) Create a “vulnerable source” safety protocol: when to offer anonymity, how to avoid doxxing via job details, how to handle minors (generally avoid), and how to pause if they seem distressed.
4) Provide follow‑up templates: thank‑you note, request for document verification, and a right‑of‑reply notice if I’m including criticism.
5) End with a checklist: what I must log after each interview (date/time, consent status, key claims, proof needed, and next actions).
Output: 3 outreach emails + consent script + safety protocol + follow‑ups + interview logging checklist.
172. Interview Question Builder (Mechanism + Tradeoffs)
Ask questions that produce evidence, not marketing lines.
Category: Sourcing
Inputs:
– Source type: [SOURCE_TYPE]
– My angle/thesis: [ANGLE]
– What I must verify: [VERIFICATION_GOALS]
– Sensitive areas to handle carefully: [SENSITIVITIES]
– Time for interview: [DURATION] minutes
Task: Create a question pack with structure:
1) 8 opener questions that build rapport and establish credentials/how they know what they know.
2) 18 core questions divided into: mechanism (“how it works”), measurement (“how you know”), tradeoffs (“what it breaks”), and incentives (“who benefits”).
3) 12 “pressure‑test” follow‑ups that politely challenge vague claims (“Can you name a number?”, “What would change your mind?”, “What’s your baseline?”).
4) 6 questions that invite skepticism: ask them to name the best criticism and to identify unknowns/uncertainties.
5) A mini “quote capture” guide: prompts that yield vivid, concrete quotes without leading the witness (e.g., “Describe the moment you noticed…”).
6) A post‑interview proof checklist: what documents/logs/screenshots I should request to corroborate claims (use [REQUEST_DOC] placeholders).
Output: Openers + core questions + follow‑ups + skeptic questions + quote capture prompts + proof request checklist.
173. Timeline Builder (Dates, Versions, Milestones)
Make chronology accurate so your story doesn’t wobble.
Category: Fact-Checking
Inputs:
– Raw timeline notes (paste): [TIMELINE_NOTES]
– Source links/docs (paste): [SOURCES]
– Entities involved: [ENTITIES]
– Scope (start/end): [TIME_RANGE]
Task: Build a timeline pack:
1) Convert notes into a table: date → event → what changed → evidence link → confidence (high/med/low).
2) Identify contradictions or unclear sequences and list the exact proof needed to resolve each conflict (2+ methods per conflict).
3) Produce a “reader timeline” version: 8–12 bullets that are easy to follow, with neutral wording and no inference of intent.
4) Suggest where to place the timeline in my article (sidebar, mid‑story, or appendix) and write a 90–130 word “timeline narration” paragraph.
5) Provide a checklist for timeline claims: version numbers, time zones, and “as of” phrasing to prevent staleness.
Output: Evidence‑linked timeline table + conflict list + reader timeline + narration paragraph + timeline checklist.
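The step‑1 evidence table maps naturally onto structured records, which makes sorting and the high/med/low filter for the reader version mechanical. A minimal Python sketch, where the field names and sample entries are illustrative rather than part of the prompt:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TimelineEntry:
    when: date        # event date (normalize time zones before this point)
    event: str        # what happened
    change: str       # what changed as a result
    evidence: str     # link or document reference
    confidence: str   # "high" / "med" / "low"

def reader_timeline(entries):
    """Sort chronologically; keep only well-evidenced rows for the reader version."""
    ordered = sorted(entries, key=lambda e: e.when)
    return [e for e in ordered if e.confidence in ("high", "med")]

entries = [
    TimelineEntry(date(2024, 6, 1), "v2.0 released", "new pricing tier", "[SOURCE_LINK]", "high"),
    TimelineEntry(date(2024, 3, 15), "policy announced", "opt-out added", "[SOURCE_LINK]", "low"),
]
# Low-confidence rows stay in the working table but drop out of the reader version.
print([e.event for e in reader_timeline(entries)])
```

Keeping the working table and reader timeline in one structure avoids the two drifting apart between drafts.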
174. Mechanism Explainer Builder (How It Works)
Explain the ‘how’ with one concrete example and honest limits.
Category: Writing Craft
Inputs:
– Rough mechanism notes (paste): [MECHANISM_NOTES]
– One concrete example I can use (real): [EXAMPLE]
– Reader baseline: [BASELINE]
– Terms I must define: [TERMS]
– Known limits/unknowns: [LIMITS]
Task: Create a mechanism explainer pack:
1) Rewrite the explanation in three layers: (A) 60–80 word “quick clarity,” (B) 150–190 word “mechanism paragraph,” (C) 250–330 word “deep but readable” version with one concrete example and at least 2 caveat sentences.
2) Produce a glossary box: 8–12 key terms with one‑line definitions and “common misunderstanding.” Add [CITE_SOURCE] placeholders where needed.
3) Provide 6 analogies that are safe (what they match and what they don’t), and tell me which 2 are best for this audience.
4) Add a “failure modes” mini‑section: 5 ways the mechanism breaks or produces unintended outcomes (no speculation—use [NEED_EVIDENCE] if not verified).
5) End with 10 sentence templates to hedge correctly and avoid overclaiming.
Output: 3-layer rewrite + glossary + analogy options + failure modes + safe hedge templates.
175. Skeptic Lane Builder (Best Critique Upfront)
Add smart skepticism without turning the piece into ‘both sides.’
Category: Structure
Inputs:
– Thesis/angle: [ANGLE]
– Evidence I have: [EVIDENCE_HAVE]
– Criticisms I’ve heard (paste): [CRITICISMS]
– Who could be harmed/benefit: [STAKEHOLDERS]
– Risk level: [RISK_LEVEL]
Task: Build a skeptic lane kit:
1) Identify the 8 strongest skeptical questions an informed reader would ask. Pair each question with the best evidence type to answer it (data, audit, document, independent expert, hands‑on test).
2) Create a “skeptic segment” outline: where it belongs, what it should accomplish, and what it should not do (avoid straw men).
3) Write 2 versions of the skeptic segment (each 220–300 words): (A) skeptical‑but‑curious, (B) tougher “cross‑exam” tone—both must be fair and evidence‑based. Use placeholders [ADD_SOURCE] as needed.
4) Provide a “false balance detector”: 10 signals that I’m giving undue weight to weak objections and how to fix it.
5) Provide transition lines into and out of the skeptic lane so the piece stays cohesive.
Output: Skeptic questions + segment outline + 2 skeptic segments + false balance detector + transitions.
176. Case Study Selector (Representative, Not Extreme)
Pick examples that illuminate the system without cherry‑picking.
Category: Ethics
Inputs:
– Candidate examples (paste list): [EXAMPLES]
– What each example can prove: [WHAT_IT_PROVES]
– Verification assets (docs/logs/quotes): [EVIDENCE]
– Consent/privacy constraints: [CONSENT_PRIVACY]
– Audience goal: [AUDIENCE_GOAL]
Task: Create an example selection pack:
1) Build a scoring rubric with at least 10 criteria, including: representativeness, verifiability, relevance to thesis, diversity of perspective, privacy risk, harm risk, novelty, clarity, and “system illumination.”
2) Score each candidate example using the rubric and show me the table. If info is missing, mark it [MISSING] and tell me what to gather.
3) Recommend the best 2 examples and explain why they are fair and useful, plus how to contextualize them to avoid overgeneralizing.
4) Write 2 mini‑case sections (220–280 words each) using placeholders for names and details that must be verified, including ethical context and a clear link to the mechanism of the story.
5) Provide an “example honesty” checklist: how to avoid implying the case is universal, how to attribute, and how to state limits.
Output: Rubric + scored table + best picks + 2 mini‑case sections + honesty checklist.
177. Analogy Builder (Explain Without Lying)
Create analogies that illuminate—and note where they break.
Category: Style
Inputs:
– Concept to explain: [CONCEPT]
– Audience baseline: [AUDIENCE_LEVEL]
– Key constraints (what must stay true): [CONSTRAINTS]
– Common misconceptions to avoid: [MISCONCEPTIONS]
Task: Build an analogy pack:
1) Generate 12 analogies across different domains (everyday life, biology, cities, music, logistics, sports, cooking). For each: what it matches, what it does not match, and the “dangerous misread” risk.
2) Select the best 3 analogies for [AUDIENCE_LEVEL] and write 3 short explanation paragraphs (200–260 words each) that use the analogy, then explicitly state the limits (“The analogy breaks when…”). Include one concrete example placeholder [EXAMPLE] and one caveat sentence per paragraph.
3) Create a “precision phrasing” list: 15 phrases that keep analogies honest (“roughly,” “in one sense,” “not exactly,” “think of it like…”).
4) Provide a final “analogy test”: 8 questions to check whether the analogy could mislead readers or oversimplify important tradeoffs.
Output: 12 analogies + 3 chosen paragraphs + precision phrasing list + analogy test checklist.
178. Claim Table Generator (Fact‑Check Ready)
Turn your draft into a claim-by-claim proof checklist.
Category: Fact-Checking
Inputs:
– Draft (paste): [DRAFT]
– Source list (paste): [SOURCES]
– High‑risk claims: [HIGH_RISK]
Task: Produce a claim table pack:
1) Extract 25–60 claims and present a table: claim → type (fact/number/quote/interpretation) → source currently supporting it → strength (high/med/low) → missing proof → action (add citation, rephrase, remove).
2) Highlight the top 10 “publication blockers” (claims most likely to trigger editor pushback). Provide safer rewrites for each and specific proof targets.
3) Provide an attribution map: where “studies show” or “experts say” is too vague, and how to attribute precisely without clutter.
4) Provide a “numbers audit”: check for denominators, time ranges, and comparison baselines. Flag any number that lacks context with [ADD_CONTEXT].
5) End with a 15‑point final fact‑check checklist I can run before submission.
Output: Claim table + top blockers + rewrites + attribution map + numbers audit + final checklist.
179. Legal/Ethical Risk Map (Defamation, Privacy, IP)
Identify risky zones and rewrite before editors ask.
Category: Ethics
Inputs:
– Draft excerpt (paste): [DRAFT_EXCERPT]
– Entities mentioned: [ENTITIES]
– Evidence I have: [EVIDENCE_HAVE]
– Artifacts (screenshots/logs/emails): [ARTIFACTS]
– Right‑of‑reply status: [RIGHT_OF_REPLY]
Task: Create a risk map pack:
1) Identify risky passages and label the risk type: defamation/allegation, privacy/doxxing, minors, medical claims, IP/trade secrets, security. Explain why each is risky in plain English.
2) Provide 2 rewrites per risky passage: (A) safer neutral wording, (B) attribution‑forward wording. Preserve meaning but remove overclaiming and implied intent.
3) List the minimum proof needed to keep the strongest version (filing, audit, court doc, recorded interview, public statement). Use [NEED_PROOF] placeholders.
4) Provide a redaction plan for artifacts: what to blur, how to refer to material without publishing sensitive details, and how to store it for editor review.
5) End with a “risk‑aware submission note” template (80–130 words) to send privately to the editor summarizing sensitive areas and corroboration.
Output: Risk labels + rewrites + proof requirements + redaction plan + editor risk note template.
180. Plagiarism & Patchwriting Detector (Clean Originality)
Rewrite so you’re not accidentally echoing sources.
Category: Ethics
Inputs:
– My draft excerpt (paste): [MY_DRAFT]
– Source excerpts that influenced it (paste): [SOURCE_EXCERPTS]
– What facts I must keep: [MUST_KEEP_FACTS]
– Citation style: [STYLE]
Task: Produce an originality repair pack:
1) Identify patchwriting risk areas: similar sentence order, unique phrases, repeated metaphors, or mirrored structure. Explain each risk plainly.
2) For each risky passage, provide 2 clean rewrites: (A) paraphrase with attribution, (B) rewrite that reframes the idea using my own logic or reported detail. Use [ADD_SOURCE] or [ADD_QUOTE] markers when appropriate.
3) Provide a “facts vs phrasing” checklist: what must be cited, what can be common knowledge, and how to keep unique claims attributed.
4) Provide a 10‑step anti‑patchwriting workflow: read → close source → write from notes → verify → cite → re‑read → simplify → check uniqueness → add original reporting → final scan.
5) End with a short “ethics note” I can keep in my workflow about respecting sources while writing distinctly.
Output: Risk diagnosis + rewrites + facts vs phrasing checklist + 10‑step workflow + ethics note.
181. Voice Consistency Pass (Crisp Magazine Tone)
Unify tone: smart, precise, and readable.
Category: Style
Inputs:
– Draft excerpt (paste): [DRAFT_EXCERPT]
– Target vibe (choose): [VIBE] (curious, skeptical, humane, analytical)
– Words I overuse: [OVERUSED_WORDS]
– Terms I must keep: [MUST_KEEP_TERMS]
Task: Create a voice consistency pack:
1) Diagnose tone problems: where it becomes vague, salesy, moralizing, or too technical. Give examples from my text (short excerpts).
2) Rewrite the excerpt with a consistent voice. Keep facts intact, reduce filler, and break up long sentences. Use placeholders for missing facts instead of fabricating.
3) Provide a “style dial” guide: 12 rules I can apply (verb‑first sentences, avoid passive unless needed, no “revolutionary,” define acronyms once, 1 idea per sentence when possible).
4) Give me a personal “banned buzzwords” list and suggested replacements based on my excerpt.
5) End with a mini before/after table: 10 lines showing original → improved → why it’s better.
Output: Tone diagnosis + rewritten excerpt + style rules + replacement list + before/after table.
182. Subhead + Scannability Pack (Mobile‑First)
Make the article easy to skim without dumbing it down.
Category: Editing
Inputs:
– Outline or draft (paste): [DRAFT_OR_OUTLINE]
– Reader promise: [READER_PROMISE]
– Preferred tone: [TONE]
– Word count: [WORD_COUNT]
Task: Produce a scannability pack:
1) Write 10–14 subhead options that follow a coherent arc (setup → mechanism → stakes → skeptic → what next). Each subhead should be specific and not clickbait.
2) For each section, add a one‑sentence “why this section exists” value line that keeps me focused while drafting.
3) Identify 6 places for callouts: “Key term,” “What surprised me,” “The tradeoff,” “The data,” “The skeptic,” “What to watch.” Provide templates for each callout with placeholders [INSERT].
4) Provide paragraph surgery recommendations: where to split, where to combine, where to move a sentence to improve pacing.
5) Write a “mobile skim test”: 12 questions a skimmer should be able to answer just from headline + deck + subheads + callouts. Map each question to the section that answers it.
Output: Subheads + value lines + callout templates + paragraph surgery notes + mobile skim test map.
183. Pull Quotes + Sidebars Pack (Extra Value)
Add magazine-style extras that make editors happy.
Category: Data & Visuals
Inputs:
– Draft excerpt (paste): [DRAFT_EXCERPT]
– Available quotes (paste): [QUOTES]
– Available data points (paste): [DATA_POINTS]
– Visual assets possible (screenshots/diagrams): [ASSETS]
Task: Create an extras pack:
1) Select 8 pull quotes from my quotes (or mark where I need better quotes). For each: why it works, where to place it, and a context line to avoid misleading readers.
2) Propose 6 sidebars/callouts with titles, purpose, and 4–7 bullet contents each. At least one should be “Myth vs reality,” one “Timeline,” and one “How it works in 5 steps.” Use placeholders where needed.
3) Provide a mini “visual suggestion” for each sidebar (chart, diagram, screenshot) and write caption + alt text templates.
4) Give an editor note: how these extras improve comprehension and keep the main narrative clean.
Output: Pull quote set + 6 sidebars + visual suggestions + captions/alt text + editor note.
184. Assignment Kickoff Pack (Timeline + Deliverables)
When an editor says yes, respond like a pro.
Category: Career
Inputs:
– Assignment topic/angle: [ANGLE]
– Requested length: [WORD_COUNT]
– Deadline(s): [DEADLINES]
– Visual needs (if any): [VISUALS]
– Known editorial constraints: [CONSTRAINTS]
Task: Build a kickoff pack:
1) Draft a kickoff email (160–210 words) confirming: scope, deadline, draft milestones, and what I will deliver (link pack, claim table, captions/alt text). Include 6 smart questions for the editor (rights, fact‑check, legal review, sensitivity, formatting).
2) Create a 10‑day plan (or customize to [DEADLINES]): daily 60–120 minute blocks with tasks (reporting, outline, draft, revision, proof).
3) Provide a deliverables checklist: what files and notes I send at first draft and at final, with naming rules and placeholders like [FILE_NAME].
4) Provide a “risk flag” note template for sensitive claims, emphasizing corroboration and right‑of‑reply.
Output: Kickoff email + milestone plan + deliverables checklist + risk flag note template.
185. Reporting Budget Plan (Time/Cost/Tools)
Scope your reporting realistically so you finish on time.
Category: Career
Inputs:
– Deadline: [DEADLINE]
– Time available per day: [TIME_PER_DAY]
– Expected story length: [WORD_COUNT]
– Access (sources/assets): [ACCESS]
– Hard constraints (travel budget, tools): [CONSTRAINTS]
Task: Create a reporting budget pack:
1) Recommend a “minimum viable reporting plan” (MVRP): number of interviews by type, minimum document/data targets, and one hands‑on verification step if relevant.
2) Create a time budget table: reporting hours, writing hours, revision hours, fact‑check hours—plus buffer. Map tasks to days.
3) List likely costs (transcription, travel, tools, data access) with placeholders [COST] and how to minimize each cost ethically.
4) Provide a scope‑cut menu: if I lose 30% of time, what can be cut without harming truth (and what must not be cut).
5) Provide a “proof first” priority list: which evidence to secure earliest to de‑risk the piece.
Output: MVRP + time budget table + cost list + scope‑cut menu + proof‑first priority list.
186. Compression Edit (Cut 20% Without Losing Meaning)
Shorten cleanly: remove fluff, keep proof and rhythm.
Category: Editing
Inputs:
– Draft (paste): [DRAFT]
– Target word count: [TARGET_WORD_COUNT]
– Must‑keep elements: [MUST_KEEP] (scene, key quote, data, skeptic)
Task: Produce a compression kit:
1) Identify the “spine” sentences: the minimum set of paragraphs that carry the thesis and proof. Mark what’s optional.
2) Provide a cut plan: (A) easiest trims (adverbs, redundancy), (B) structural trims (merge sections), (C) deep trims (remove entire beat). Estimate word savings per cut type.
3) Rewrite the draft into a shorter version (or show section-by-section edits) preserving: clear thesis, proof, skeptic lane, and honest uncertainty. Use [ADD_SOURCE] placeholders when a sentence becomes too compressed to remain defensible.
4) Provide a “no‑damage checklist”: 12 items to confirm after cutting (logic, attribution, timeline, numbers context, fairness).
Output: Spine map + cut plan + compressed rewrite + no‑damage checklist.
187. Reader Trust Signals Pack (Methodology + Disclosures)
Add transparency that boosts credibility without killing flow.
Category: Ethics
Inputs:
– What I did (reporting/testing): [METHODS]
– What I did not do: [LIMITS]
– Access/freebies/relationships: [COI]
– Any anonymous sources: [ANON_SOURCES]
– Publication norms (if known): [POLICY]
Task: Build a trust signals pack:
1) Write 2 short methodology blurbs (40–70 words) that can appear as a sidebar or end note.
2) Provide 10 “trust micro‑lines” I can place inside the narrative (e.g., “In my tests on [DATE]…,” “In documents reviewed by…”), with placeholders and guidance on when each is appropriate.
3) Create disclosure language options: 5 one‑sentence disclosures + 2 longer (60–90 word) disclosures, neutral tone, no defensiveness.
4) Provide an editor‑only note template that lists corroboration, sensitive claims, and right‑of‑reply attempts.
5) End with a “trust checklist”: 15 items that increase credibility without adding fluff.
Output: Method blurbs + micro‑lines + disclosure options + editor note + trust checklist.
188. Ending Builder (Honest Close, No Hype)
Write an ending that lands the point and respects uncertainty.
Category: Writing Craft
Inputs:
– Thesis: [THESIS]
– Key evidence beats: [EVIDENCE_BEATS]
– Skeptic/counterpoint: [SKEPTIC]
– Human impact (if any): [IMPACT]
– Tone preference: [TONE]
Task: Generate 4 ending options (each 220–300 words):
1) “Tradeoff ending” (name what’s gained vs lost).
2) “Indicator ending” (3–5 signals to watch, no prophecy).
3) “Return to scene” ending (only if I have a real opening scene; otherwise use [SCENE_PLACEHOLDER]).
4) “Question‑answered ending” (explicitly answer the core question and state limits).
After the 4 endings, provide 10 final lines (one sentence each) that feel WIRED‑ish: crisp, non‑moralizing, non‑hype. Also provide a “bad ending detector” checklist (12 items) to help me avoid overclaiming or preaching.
Output: 4 endings + 10 final lines + bad ending detector checklist.
189. Multi‑Format Repurpose (Web/Newsletter/Audio Outline)
Reuse your reporting across formats without rewriting from scratch.
Category: Structure
Inputs:
– Original outline or draft (paste): [DRAFT_OR_OUTLINE]
– Key sources/quotes/data (paste): [EVIDENCE]
– Target newsletter length: [NEWSLETTER_WORDS]
– Audio length (minutes): [AUDIO_MINUTES]
Task: Create a repurpose pack:
1) Web feature: confirm the spine (7–10 sections) and list which evidence belongs in each.
2) Newsletter version: write a newsletter structure (hook → 3 beats → skeptic → takeaway) and draft the full newsletter (350–650 words) with placeholders where needed.
3) Audio outline: a 6–10 minute script outline with timestamps, host lines, “clip moments,” and questions for a guest (if applicable). Include notes on what needs attribution on air.
4) Provide a “format integrity checklist”: how to keep claims consistent across formats and avoid drift.
Output: Web spine + drafted newsletter + audio outline with timestamps + integrity checklist.
190. Hands‑On Testing Protocol (App/Product/Service)
Design a fair, repeatable test so your review isn’t vibes.
Inputs:
– Thing to test: [THING_TO_TEST]
– Version/platform: [VERSION_PLATFORM]
– Intended story angle: [ANGLE]
– What the product claims: [PRODUCT_CLAIMS]
– My baseline/comparison: [BASELINE] (previous tool, competitor, manual method)
– Time + constraints: [TIME_CONSTRAINTS]
Task: Create a test protocol pack:
1) Define 5–8 test questions tied to my angle (speed, reliability, privacy, accuracy, UX friction, cost).
2) Write a step‑by‑step test plan (10–18 steps) with repeatability notes (same inputs, same conditions, same measurement).
3) Create a measurement table: metric → how to measure → tool needed → what “good” looks like → common pitfalls (e.g., caching, learning effects).
4) Provide a logging template I can paste into Notes/Sheets: test case, timestamp, setup, result, screenshot/log file, anomaly, interpretation limit.
5) Build a fairness checklist: disclose constraints, avoid cherry‑picking, test failure cases as well as successes, and include the skeptic lane (“What would change your mind?”).
6) Draft a 120–160 word methodology blurb for readers and an editor‑only evidence note (what artifacts exist, where stored).
Output: Test questions + protocol steps + measurement table + logging template + fairness checklist + methodology blurbs.
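The step‑4 logging template works well as a plain CSV that any spreadsheet can open. A minimal Python sketch; the column names and the sample test case are illustrative assumptions, not part of the prompt:

```python
import csv
import io
from datetime import datetime, timezone

# Columns mirror the logging template: one row per test-case run.
FIELDS = ["test_case", "timestamp", "setup", "result",
          "artifact", "anomaly", "interpretation_limit"]

def log_run(writer, test_case, setup, result, artifact="", anomaly="", limit=""):
    writer.writerow({
        "test_case": test_case,
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "setup": setup,
        "result": result,
        "artifact": artifact,           # screenshot/log file path
        "anomaly": anomaly,             # anything unexpected during the run
        "interpretation_limit": limit,  # what this single run cannot prove
    })

buf = io.StringIO()  # swap for open("test_log.csv", "w", newline="") in practice
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
log_run(writer, "cold-start speed", "same device, cache cleared", "4.2s",
        artifact="[SCREENSHOT]", limit="single run; not a benchmark")
print(buf.getvalue())
```

Logging the interpretation limit next to each result keeps the later write‑up honest about what a single run can and cannot show.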
191. Data Story Starter (Dataset → Angle → Chart)
Turn a dataset into a cautious, explainable story angle.
Inputs:
– Dataset/source description: [DATASET_DESCRIPTION]
– Key columns: [COLUMNS]
– Time range + geography: [TIME_GEO]
– Question I want to answer: [QUESTION]
– Audience level: [AUDIENCE_LEVEL]
Task: Produce a dataset‑to‑story kit:
1) Assess dataset credibility: who collected it, how, known biases, missingness, definitions. Create a “data caveats” box with [CITE_SOURCE] placeholders.
2) Propose 8 analysis questions (not conclusions): distribution, trends, outliers, segmentation, before/after, correlations vs causation warnings.
3) Recommend 3 chart ideas that are truthful and readable. For each: what it shows, what it does NOT show, axes/units, and a suggested caption.
4) Provide a mini “cleaning checklist”: standardize dates, handle missing values, dedupe, avoid double counting, confirm denominators.
5) Draft a 200–260 word “data section” that uses placeholders [INSERT_NUMBER] and [INSERT_SOURCE] and includes at least 2 caveat sentences and one skeptic question.
6) Provide alt text + long description templates for each chart, and a short methodology blurb (“How we analyzed the data”).
Output: Credibility assessment + 8 analysis questions + 3 chart plans + cleaning checklist + drafted data section + accessibility templates.
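The step‑4 cleaning checklist (standardize dates, dedupe, confirm denominators) can be sketched in a few lines of Python. The rows, column names, and date formats below are invented for illustration; a real dataset will need its own formats and dedupe key:

```python
from datetime import datetime

rows = [
    {"date": "2024-01-05", "region": "North", "cases": 10, "population": 1000},
    {"date": "05/01/2024", "region": "North", "cases": 10, "population": 1000},  # same row, different date format
    {"date": "2024-02-01", "region": "South", "cases": 3, "population": 0},      # missing denominator
]

def parse_date(s):
    """Standardize mixed date formats before any comparison or dedupe."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            return datetime.strptime(s, fmt).date()
        except ValueError:
            pass
    raise ValueError(f"unrecognized date: {s}")

def clean(rows):
    seen, out = set(), []
    for r in rows:
        r = {**r, "date": parse_date(r["date"])}
        key = (r["date"], r["region"], r["cases"])  # dedupe on normalized fields
        if key in seen:
            continue  # avoid double counting
        seen.add(key)
        # Confirm the denominator before computing a rate; never divide by zero.
        r["rate"] = r["cases"] / r["population"] if r["population"] else None
        out.append(r)
    return out

cleaned = clean(rows)
print(len(cleaned), cleaned[0]["rate"])
```

Note that the duplicate only surfaces after dates are normalized, which is why standardizing comes first in the checklist.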
192. Expert Vetting Pack (Who to Trust + Conflicts)
Choose experts responsibly and avoid authority theater.
Inputs:
– Candidate experts/orgs (paste): [CANDIDATES]
– What I need from them (explain/critique/verify): [NEED]
– Risk level (health/legal/allegations): [RISK_LEVEL]
– My timeline: [TIMELINE]
Task: Create an expert vetting pack:
1) Build a vetting rubric (10–14 criteria), including: relevant expertise, publication record, conflicts, history of corrections, independence, methodological rigor, communication clarity, and “known for what?”
2) For each candidate, produce a “questions to verify” list: funding, consulting, patents, board roles, public statements, and how they know their claim. Use placeholders like [CHECK_LINKEDIN] / [CHECK_PAPERS].
3) Provide a balanced sourcing map: 3 expert roles I should include (e.g., builder, regulator, skeptic) and why each matters.
4) Write 12 interview questions that surface conflicts politely and ask for evidence (not vibes). Include follow‑ups that ask for primary sources.
5) Draft 8 attribution templates that are transparent about expertise and limits (“X studies this,” “X consults for,” “X reviewed the data”).
6) End with a “don’t do this” list: 10 common beginner mistakes (anonymous “researchers,” outdated authorities, uncritical think‑tank quoting) and how to fix them.
Output: Vetting rubric + candidate checklists + sourcing map + interview questions + attribution templates + common mistakes list.
193. Press Release → Reportable Story (De‑PR‑ify)
Turn marketing claims into questions you can prove.
Inputs:
– Press release text (paste): [PRESS_RELEASE]
– My target angle/beat: [BEAT]
– What I can access (demo/interviews/docs): [ACCESS]
– Deadline: [DEADLINE]
Task: Create a PR‑to‑story kit:
1) Extract the top 12 claims and label them by type: technical, business, impact, security, “AI,” cost, performance, partnership.
2) For each claim, write (a) a neutral rewrite, (b) the verification question, (c) the minimum acceptable evidence, and (d) who to interview besides the company.
3) Provide a “context ladder”: what happened before this announcement, what similar attempts failed, and what the baseline is. Use [NEED_SOURCE] markers.
4) Design a hands‑on test or demo checklist with specific pass/fail criteria and logging steps (no results, just protocol).
5) Draft a 7‑section outline that includes a skeptic lane, a fairness/right‑of‑reply segment, and a clear “what we can say for sure” section.
6) Write a pitch email (190–230 words) to [PUBLICATION_NAME] that makes the story sound like journalism, not marketing, including an honest access statement.
Output: Claim extraction + verification questions + context ladder + test checklist + outline + final pitch email.
194. Myth‑vs‑Reality Builder (Evidence‑First)
Write a mythbusting piece that stays fair and factual.
Inputs:
– Myth/claim: [MYTH]
– Where it circulates: [WHERE_SEEN]
– What I know so far: [CURRENT_EVIDENCE]
– Audience: [AUDIENCE]
– Sensitivity: [SENSITIVITY]
Task: Build a mythbusting kit:
1) Break the myth into 5–9 sub‑claims (the pieces that can be verified separately).
2) For each sub‑claim: best evidence type, what would falsify it, likely confounders, and a safe wording option if evidence is mixed.
3) Provide an expert sourcing plan: 3 expert roles + 2 skeptic roles + 2 “affected user” roles, with what each can confirm.
4) Draft a 220–300 word “Myth vs Reality” section that includes: what people believe, why it feels true, what evidence says, and what remains unknown. Use [ADD_SOURCE] placeholders and include at least 2 caveat sentences.
5) Create a “tone guardrail” list: 10 lines to avoid (shaming, certainty) and 10 replacements that keep readers onboard.
6) End with a 7‑section outline for the full story with evidence placeholders and a fair right‑of‑reply plan if organizations are criticized.
Output: Sub‑claims map + evidence plan + sourcing plan + drafted section + tone guardrails + full outline.
195. Comparison Framework (Two Things, One Fair Test)
Compare two products/ideas without stacking the criteria in either side’s favor.
Inputs:
– Option A: [A]
– Option B: [B]
– Use case: [USE_CASE]
– Constraints (budget/time/tools): [CONSTRAINTS]
– Claims each side makes: [CLAIMS]
Task: Create a comparison kit:
1) Define 6–10 comparison criteria tied to the use case (cost, accuracy, privacy, maintenance, learning curve, reliability, ecosystem lock‑in). Explain why each criterion matters.
2) For each criterion, define: how to measure, what baseline is, what evidence counts, and where subjectivity may sneak in.
3) Create a test matrix: scenario rows × criteria columns with placeholders for results and notes. Include a “notes for fairness” column.
4) Draft a 7‑section outline that includes: lede, nut graf, criteria explanation, hands‑on method, findings (with caveats), skeptic lane, and “which one for whom.”
5) Provide a disclosure block template (free products, affiliate links, conflicts) and a “what I didn’t test” box.
6) Produce 2 pitch subject lines and a 190–230 word pitch for [PUBLICATION_NAME] emphasizing fairness and method.
Output: Criteria set + measurement definitions + test matrix + outline + disclosure templates + finished pitch.
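The scenario × criteria test matrix in step 3 can be started as a blank grid before any testing begins. The sketch below is one minimal way to do that; the scenario and criterion names are placeholders, not recommendations:

```python
# Placeholder scenarios and criteria -- swap in your own from steps 1-2.
scenarios = ["cold start", "daily workflow", "failure recovery"]
criteria = ["cost", "accuracy", "privacy"]

# Empty matrix: one cell per scenario x criterion, with slots for the
# result, free-form notes, and the "notes for fairness" column.
matrix = {s: {c: {"result": "", "notes": "", "fairness": ""} for c in criteria}
          for s in scenarios}

# Print a blank grid to fill in during hands-on testing.
header = "scenario".ljust(18) + "".join(c.ljust(12) for c in criteria)
print(header)
for s in scenarios:
    row = s.ljust(18) + "".join((matrix[s][c]["result"] or "-").ljust(12) for c in criteria)
    print(row)
```

Filling in every cell before drawing conclusions is what keeps the comparison honest: an empty cell is a test you skipped, and it belongs in the "what I didn't test" box.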
196. Cost & Tradeoff Calculator (What Adoption Really Costs)
Show the hidden costs: time, switching, risk, maintenance.
Inputs:
– Change being considered: [CHANGE]
– Audience type: [AUDIENCE_TYPE] (student, freelancer, small business, enterprise)
– Constraints: [CONSTRAINTS]
– What claims exist: [CLAIMS]
Task: Build a cost/tradeoff pack:
1) List 12 cost buckets: one‑time costs, recurring costs, switching costs, training time, data migration, opportunity cost, downtime risk, compliance/privacy costs, vendor lock‑in, support burden, long‑term maintenance, and “failure recovery.”
2) Create a simple worksheet template: bucket → formula → inputs needed → notes → evidence source placeholder [SOURCE]. Include ranges (low/med/high) with assumptions.
3) Provide 3 reader personas and fill the worksheet with placeholders for each (so readers see how costs vary).
4) Propose 2 visual ideas: a stacked bar chart (cost buckets) and a tradeoff radar (e.g., privacy vs. convenience). Include caption + alt text templates.
5) Draft a 220–300 word section that explains: “Here’s what people forget to count,” with at least 2 caveat sentences and a skeptical check (“What if your assumption is wrong?”).
Output: 12 cost buckets + worksheet template + 3 persona worksheets + 2 visual plans + drafted section.
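The bucket → formula → inputs worksheet in step 2 reduces to simple range arithmetic. Here is a minimal sketch of the low/high structure; every bucket name, figure, and the $20/hour rate below is an illustrative assumption, not real data:

```python
# Each bucket maps to a (low, high) estimate in currency units.
# All figures are placeholders showing the worksheet's structure, not real costs.
cost_buckets = {
    "one_time_license": (100, 300),
    "training_time": (5 * 20, 15 * 20),  # 5-15 hours at an assumed $20/hour
    "data_migration": (0, 500),
    "downtime_risk": (0, 200),
}

low_total = sum(low for low, _ in cost_buckets.values())
high_total = sum(high for _, high in cost_buckets.values())
mid_total = (low_total + high_total) / 2

print(f"Estimated adoption cost: ${low_total}-${high_total} (midpoint ${mid_total:.0f})")
```

Publishing the range and the assumptions behind it, rather than a single confident number, is itself the skeptical check the drafted section calls for.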
197. Skeptical Reader Rewrite (Tone + Proof Upgrade)
Rewrite for the toughest reasonable reader.
Inputs:
– Draft excerpt (paste): [DRAFT_EXCERPT]
– Thesis/angle: [ANGLE]
– Evidence I already have: [EVIDENCE_HAVE]
– Risk level: [RISK_LEVEL]
Task: Produce a skeptical rewrite pack:
1) Identify 12 “hype signals” in my draft (vague superlatives, unqualified claims, missing baselines) and show the exact lines.
2) Rewrite the excerpt in a skeptical‑but‑fair voice, keeping it readable. Add attribution clauses and precision language, not extra fluff.
3) Insert 6 “skeptic questions” as inline callouts (for me, not readers) that show where reporting must increase.
4) Provide safer alternatives for 10 risky phrases (“will change everything,” “unprecedented,” “experts agree”).
5) End with a mini checklist: what a skeptical editor will ask next (right‑of‑reply, evidence artifacts, counterexamples) and how to prepare in one afternoon.
Output: Hype signals list + rewritten excerpt + skeptic questions + phrase replacements + editor‑skeptic checklist.
198. Scene Extraction (Turn Notes Into a Real Lede)
Build a lede from what you actually observed.
Inputs:
– Observation notes (paste): [NOTES]
– Key verified fact(s): [VERIFIED_FACTS]
– Thesis/angle: [ANGLE]
– Tone: [TONE]
Task: Create a scene‑to‑lede kit:
1) Extract a “scene inventory”: setting, action, tension, stake, character, object, metric, quote. Only include items present in notes.
2) Identify 5 missing scene details that would strengthen the opening (but must be reported). Provide 8 questions I can ask or observations I can make to capture them ethically.
3) Draft 5 lede options (120–170 words each) using different approaches: (A) moment of friction, (B) surprising detail, (C) tight anecdote, (D) experiment/test, (E) skeptic first. Each must connect to the nut graf setup without hype.
4) Provide 10 “sensory‑but‑accurate” phrasing patterns that avoid fabrication (e.g., “the screen flashed,” “the log showed”).
5) End with a bridge paragraph template (60–90 words) that transitions from the scene into the nut graf and reader promise.
Output: Scene inventory + missing details/questions + 5 ledes + phrasing patterns + bridge template.
199. Nut Graf Builder (Promise + Stakes + Scope)
Write the paragraph editors look for: what this is, why now, why you.
Inputs:
– Lede/opening (paste): [LEDE]
– Thesis/angle: [ANGLE]
– What I actually reported (assets): [ASSETS]
– Audience: [AUDIENCE]
– Scope limits: [SCOPE_LIMITS]
Task: Produce a nut‑graf pack:
1) Identify what my current lede promises (explicitly and implicitly) and what it fails to promise.
2) Write 6 nut grafs (70–110 words each) in different flavors: explanatory, skeptical, human‑impact, business/policy, tech‑mechanism, “what to watch.” Each must include: (a) story definition, (b) stakes, (c) “why now,” (d) what readers will learn, (e) scope boundary.
3) For the best 2 nut grafs, provide an outline‑bridge: 4 bullet points showing what the next 4 sections should cover to fulfill the promise.
4) Provide 12 “scope phrases” that keep you honest (“In [REGION],” “In the last [TIME],” “Among [GROUP]”).
5) End with a quick self‑check rubric: 8 questions to test whether the nut graf is crisp, non‑hype, and evidence‑aligned.
Output: Promise diagnosis + 6 nut grafs + 2 outline‑bridges + scope phrase library + self‑check rubric.
200. Lede Variations Pack (5 Openings, Same Truth)
Generate multiple ledes and choose the most honest hook.
Inputs:
– Verified facts (paste bullets): [VERIFIED_FACTS]
– Best quote (optional): [QUOTE]
– Thesis/angle: [ANGLE]
– Tone: [TONE] (curious/skeptical/urgent/quietly weird)
Task: Create a lede pack:
1) Write 5 ledes (120–170 words each) using distinct structures: (A) tight scene from facts, (B) surprising data point, (C) contradiction, (D) “how it works” hook, (E) skeptic‑first hook.
2) For each lede, annotate: what promise it makes, what section will pay it off, and what proof must appear within the first 400 words.
3) Provide a micro‑nut graf (40–60 words) after each lede (as a draft option), including scope and “why now.” Use placeholders for missing proof.
4) Recommend the best lede for [PUBLICATION_NAME] and explain why it fits the outlet’s vibe and your evidence.
5) End with a “hook ethics” checklist: avoid fear‑mongering, avoid naming private individuals, avoid implied wrongdoing without proof, and avoid overstating causality.
Output: 5 ledes + annotated promises + 5 micro‑nut grafs + recommended pick + hook ethics checklist.
201. Fact‑Checker Simulation (Hard Questions You Must Answer)
Stress‑test your story before an editor does.
Inputs:
– Draft excerpt (paste): [DRAFT_EXCERPT]
– Source list (paste): [SOURCES]
– Sensitive claims (if any): [SENSITIVE_CLAIMS]
Task: Run a fact‑checker simulation:
1) Extract 20–40 claims from my excerpt and ask one hard verification question per claim (“How do you know?”).
2) Flag the top 10 “blockers” (most likely to get challenged). For each: why it’s risky, what proof would satisfy an editor, and a safe rewrite that stays true to what we actually know.
3) Do a numbers audit: identify any number missing a denominator, time range, or baseline. Ask for the missing context explicitly.
4) Do an attribution audit: find vague attributions (“experts say,” “reports suggest”) and propose precise alternatives.
5) Provide a final “evidence to‑do list” ranked by impact: what to verify first to de‑risk the piece before submission.
Output: Claim questions list + top blockers with rewrites + numbers audit + attribution audit + ranked evidence to‑do list.
202. Freebies/COI/Links Ethics Pack (Disclosure + Boundaries)
Handle freebies, affiliate links, and relationships professionally.
Inputs:
– Freebies/access received: [FREEBIES]
– Affiliate links plan (yes/no): [AFFILIATE]
– Relationships/COI: [COI_DETAILS]
– Publication norms (if known): [POLICY_NOTES]
Task: Create an ethics pack:
1) Classify each COI by risk level (low/med/high) and explain why, in plain English.
2) Write 6 disclosure options: 3 one‑sentence, 2 short paragraph (60–90 words), 1 editor‑only expanded note. Keep tone neutral, no defensiveness.
3) Provide a boundary checklist: what I can accept (e.g., temporary access) vs what I should refuse (exclusive perks, payments from subject, conditions on coverage). Use placeholders [SUBJECT] where needed.
4) Provide “affiliate link hygiene”: how to disclose, when not to use, and how to keep editorial independence. If affiliate links are not allowed, provide a non‑affiliate alternative approach.
5) End with a “credibility guardrail” list: 10 ways COI language can backfire and safer replacements.
Output: COI classification + disclosure options + boundary checklist + affiliate hygiene + guardrail list.
203. Harm‑Minimization Pass (Vulnerable People + Doxxing Risk)
Reduce harm while keeping the journalism strong.
Inputs:
– Sensitive excerpt/outline (paste): [SENSITIVE_TEXT]
– Who is at risk: [AT_RISK_GROUPS]
– What identifiers appear (location, workplace, handle): [IDENTIFIERS]
– Evidence I have: [EVIDENCE_HAVE]
Task: Create a harm‑minimization pack:
1) Identify harm vectors in my text: doxxing, retaliation, stigma, legal exposure, re‑traumatization, and “how‑to abuse” details. Explain each risk plainly.
2) Provide safer rewrites for risky sentences (2 options each): one that anonymizes, one that generalizes while preserving meaning. Mark any rewrite that requires editor approval.
3) Produce an anonymization checklist: what to remove/blur, how to use composites (if allowed), and how to avoid “unique detail” identification.
4) Create a consent and care protocol for interviews: what to explain, how to avoid coercion, how to handle distress, and when to stop reporting a detail.
5) Provide an editor note template explaining the protective steps taken and what evidence exists privately for verification.
Output: Harm vector map + safer rewrites + anonymization checklist + consent/care protocol + editor note template.
204. Writing With Numbers (Context, Denominators, Baselines)
Use numbers without misleading readers.
Inputs:
– Numbers lines (paste): [NUMBERS_LINES]
– Sources for numbers (paste): [SOURCES]
– Audience: [AUDIENCE]
Task: Create a numbers clarity pack:
1) For each line, identify what’s missing: denominator, baseline, timeframe, sample size, geography, method, or uncertainty.
2) Rewrite each line into a “defensible numbers sentence” with context and attribution. Provide two versions when useful: concise and expanded.
3) Create a “numbers context library”: 15 add‑ons that keep numbers honest (“compared with,” “over [TIME],” “in a sample of”).
4) Create a mini “numbers visualization suggestion” list (3 charts) that would clarify the story better than text, with captions and caveats.
5) End with a 12‑point numbers checklist: avoid double counting, avoid precision theater, check units, align comparisons, and cite the original source, not a copy‑paste blog.
Output: Missing‑context flags + rewrites + context library + visualization suggestions + numbers checklist.
205. Editor‑Ready Source Pack (Link Pack + Evidence Notes)
Make your submission easy to fact-check and edit.
Inputs:
– Sources/links (paste): [SOURCES]
– Interview list (name/role + consent): [INTERVIEWS]
– Artifacts available (screenshots/logs): [ARTIFACTS]
– Sensitive claims: [SENSITIVE_CLAIMS]
Task: Produce an editor‑ready source pack:
1) Link pack table: link → what it supports → why it’s credible → key quote/data to pull → “retrieved on [DATE]” placeholder.
2) Interview log: person/role → date → on‑record/background → key claims → proof needed → follow‑up request.
3) Artifact inventory: artifact → what it proves → privacy redactions needed → shareability (public/editor‑only).
4) “Sensitive claims note” template: bullet list of sensitive areas, corroboration summary, and right‑of‑reply attempts.
5) Identify missing sources for my top 10 claims and recommend the most efficient next evidence to obtain (document/data/expert).
Output: Link pack + interview log + artifact inventory + sensitive claims note + missing‑evidence action list.
206. Responding to Edits (Professional, Fast, Accurate)
Turn editor comments into clean revisions without drama.
Inputs:
– Editor notes (paste): [EDITOR_NOTES]
– Draft excerpts (paste): [DRAFT_EXCERPTS]
– Deadline for revisions: [REVISION_DEADLINE]
Task: Build a revision response pack:
1) Classify each note: structural, clarity, fact, tone, legal/ethics, sourcing, visuals. Prioritize them (must/should/nice).
2) Create an action plan: what to change, where, and what new evidence is required. Mark evidence tasks with [NEED_SOURCE] or [NEED_QUOTE].
3) Draft a response email (120–170 words) that confirms understanding, lists any questions, and sets expectations for delivery.
4) Provide a “change log” table template: comment → action taken → link to evidence → status. Include a “needs editor decision” column.
5) Provide 8 respectful question templates for when an edit could change meaning or introduce risk.
Output: Classified notes + action plan + response email + change log template + question templates.
207. Line‑Edit Rhythm Pass (Clarity + Cadence)
Make sentences punchy, precise, and readable.
Inputs:
– Draft excerpt (paste): [DRAFT_EXCERPT]
– Tone target: [TONE] (crisp/skeptical/curious/human)
– Words I overuse: [OVERUSED]
Task: Produce a line‑edit pack:
1) Rewrite the excerpt with: shorter sentences where appropriate, stronger verbs, fewer filler words, and clearer subject→verb→object flow.
2) Provide a “repeat removal” list: phrases I repeat + 2 alternatives each.
3) Provide a “jargon audit”: terms that need definition or replacement. Add a one‑line definition suggestion for each.
4) Create a before/after table for 12 lines: original → improved → why it works (clarity, rhythm, precision).
5) End with a personal micro style guide (10 rules) derived from my excerpt (e.g., avoid stacked prepositional phrases, limit parentheticals, prefer concrete nouns).
Output: Edited excerpt + repetition fixes + jargon audit + before/after table + micro style guide.
208. One Idea, Three Sections (Angle Adaptation)
Adapt the same reporting to different editorial sections.
Inputs:
– Core idea: [CORE_IDEA]
– Evidence/assets: [ASSETS]
– Access advantage: [ACCESS]
– Word count range: [WORD_COUNT]
Task: Produce an adaptation pack:
1) Define 3 section formats: (A) feature (narrative + mechanism), (B) service (how‑to + decision help), (C) business/policy (incentives + implications).
2) For each format, write: reader promise, “why now,” and a 7‑section outline with evidence placeholders.
3) For each format, list the minimum reporting required and what I already have vs what I must still get.
4) Write 3 separate pitch emails (190–230 words each) with subject lines. Each pitch must name the format explicitly and include an honest access statement.
5) End with a “no‑drift checklist”: 10 rules to keep claims consistent across the three versions.
Output: 3 format definitions + 3 outlines + reporting gaps + 3 pitch emails + no‑drift checklist.
209. Share Lines Pack (Social/Newsletter Teasers Without Clickbait)
Write teasers that are truthful and still compelling.
Inputs:
– Reader promise: [READER_PROMISE]
– Verified details (bullets): [VERIFIED_DETAILS]
– Skeptic point: [SKEPTIC]
– Tone: [TONE]
Task: Create a share pack:
1) 12 social posts (1–2 sentences each) with different angles: curiosity, contradiction, practical value, “what surprised me,” and “what to watch.” No emojis unless asked.
2) 6 newsletter blurbs (45–70 words) that explain what readers will get, include one specificity detail, and avoid hype.
3) 8 alternative headlines for sharing (not necessarily the article headline) that stay truthful.
4) A “truth filter”: for each share line, add a note: what exact section/evidence in the article backs it up. If missing, mark [NOT_BACKED] and rewrite the line.
5) End with a clickbait‑avoid checklist: 12 phrases to avoid and safer replacements.
Output: Social posts + newsletter blurbs + share headlines + truth filter + clickbait‑avoid checklist.
210. Submission + Follow‑Up Protocol (Polite Persistence)
Send, follow up, and move on without burning bridges.
Inputs:
– Pitch email draft (paste): [PITCH_EMAIL]
– Outlet guidelines (paste): [GUIDELINES]
– My timeline/urgency: [TIMELINE]
– Simultaneous submission policy (if known): [SIMULTANEOUS_POLICY]
Task: Create a submission protocol pack:
1) A “send checklist”: subject line, pitch length, links, clips, access statement, reporting plan, and what attachments to avoid unless requested.
2) A follow‑up schedule with specific day counts and message types (gentle ping, value‑add update, final close‑the‑loop). Use placeholders [DAYS] and keep it respectful.
3) Write 3 follow‑up emails (each 60–110 words): (A) gentle, (B) update with new reporting asset, (C) close‑the‑loop that invites future ideas.
4) Provide a “no reply” decision tree: when to re‑angle, when to pitch elsewhere, and how to avoid policy violations.
5) End with a simple tracking table template (outlet, editor, date sent, follow‑up dates, status, notes) and naming rules for my files.
Output: Send checklist + follow‑up schedule + 3 follow‑up emails + no‑reply decision tree + tracking table template.
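The tracking table in step 5 can live in a plain CSV you update after every send. This sketch mirrors the columns listed above; the outlet, editor, and dates in the sample row are invented:

```python
import csv
import io

# Columns mirror the tracking fields listed above; the example row is invented.
columns = ["outlet", "editor", "date_sent", "followup_dates", "status", "notes"]
pitches = [
    ["ExampleMag", "J. Doe", "2024-05-01", "2024-05-08",
     "awaiting reply", "re-angle if no reply"],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(columns)
writer.writerows(pitches)
print(buf.getvalue())  # paste into a spreadsheet, or save as pitch_tracker.csv
```

One row per pitch keeps the simultaneous-submission question answerable at a glance: you can always see which outlet has the piece and for how long.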
Built for beginners pitching WIRED-like magazines: prioritize clear stakes, real reporting, and strong ethics.