
Ethics & AI SOP — collect plagiarism rules, citation rules, disclosure requirements, and acceptable AI assists before you write for any website

You want to write blog posts, articles, guest posts, or even journal-style pieces for serious websites and magazines, and you want to earn money without ever worrying that an editor will accuse you of plagiarism or of hiding AI use. This SOP gives you a calm, step-by-step way to collect each publication’s ethics and AI rules before you draft. You will open a set of predictable pages, you will skim them with purpose, and you will copy short sentences into your own notes. Then you will decide how you can safely use AI tools for ideas, outlining, and line editing while still keeping full human authorship and full responsibility for your work.

Think of this as a “safety and trust intake” that you run on every outlet, whether it is a big tech magazine like WIRED, a niche blog, or an academic-style publication. Once you have this one-page intake filled in, you can say with confidence that your draft will respect three things at the same time: the reader’s trust, the editor’s standards, and your own long-term reputation as a professional writer.

Plagiarism rules · Citation & sources · AI disclosure · Acceptable AI assists · Data collection before drafting
Your Goal: Know exactly what counts as plagiarism, how to cite, and how to disclose AI before you write a single line.
Your Reader: Give readers fact-checked, clearly sourced, human-led writing with honest notes about tools you used.
Your Win: Protect your byline, keep editors happy, and build a portfolio that you can proudly show for years.
Step-by-step

The 12-minute Ethics & AI intake before you write for any website

In this routine you will open a focused set of pages and you will capture key lines that describe how a publication thinks about originality, sources, and AI. You are not drafting and you are not pitching yet. You are just learning the rules of the house so that your future draft does not accidentally break anything. When you repeat this intake for each outlet, using the same order every time, your notes become consistent, your habits become honest by default, and your income is safer because avoidable ethics problems never cost you opportunities.

Three phases: Open tabs → Skim & capture rules → Summarise & decide.

12-minute Ethics & AI Intake — minute by minute

0:00–1:00 Open tabs and name your intent.
  1. Open the outlet homepage (for example, [TargetSite].com).
  2. From the footer, open About / Editorial standards, Terms of use, Privacy, and any Ethics / AI / Plagiarism / Guidelines / Submissions pages you can find.
  3. If there is a separate Author guidelines or Guide for contributors link, open that too. For academic-style outlets, open the journal’s “Ethics” and “AI policy” sections if they exist.

Intent line (write this once in your notebook): “I will collect plagiarism rules, citation rules, AI disclosure rules, and acceptable AI assists for [Outlet] so I can use AI safely and still remain fully responsible for my own writing.”

1:00–2:30 Capture how the outlet defines plagiarism and originality.
  1. On the ethics or plagiarism page, look for a definition that explains what counts as copying, self-plagiarism, or duplicate submission.
  2. Copy one sentence or short bullet into your notes that sums up their definition in their own words.
  3. Write a simple line in your voice: “For this outlet, plagiarism mostly means [copying text / copying ideas / copying structure] and they expect [original reporting / fresh angle / honest quotation marks].”
Warning: If the site has no visible plagiarism policy, you will assume strict standards. Serious editors often follow industry ethics codes even if they do not spell them out on the website.
2:30–4:00 Note what they say about quotation, citation, and sources.
  1. Scan the author guidelines and editorial standards for words like “quote,” “citation,” “reference,” “sources,” or “linking policy.”
  2. Write one line on how they expect you to show where information came from (for example, inline links to primary sources, formal references, or a simple “according to… ” style).
  3. Write one line on how they want you to handle direct quotes (quotation marks, speaker names, context, and sometimes recorded consent for interviews).
Quick habit: In your draft later, you can imagine an invisible label after every fact that answers “From where?” and you will already have the answer logged in your notes.
4:00–5:30 Find their stance on generative AI tools.
  1. Search inside each open page using your browser’s Ctrl+F or Cmd+F for “AI”, “artificial intelligence”, “ChatGPT”, “large language model”, or “machine learning”.
  2. Copy any sentences that clearly say what is allowed (for example, grammar help, language polishing, translation, outline support).
  3. Copy any sentences that say what is not allowed (for example, “do not submit AI-generated text as your own work”).
Record the verdict in your notes: AI use Banned, or AI use Allowed (with rules).
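If you save local copies of the policy pages, the keyword search in this step can also be automated. The sketch below is a minimal illustration only; the folder name, file extensions, and keyword list are assumptions you would adapt to your own setup.

```python
# Minimal sketch: scan locally saved policy pages for AI-related keywords.
# The folder name, file extensions, and keyword list are assumptions.
from pathlib import Path

KEYWORDS = ["ai", "artificial intelligence", "chatgpt", "large language model", "machine learning"]

def find_ai_mentions(folder: str) -> dict[str, list[str]]:
    """Return, per saved page, the lines that mention any AI keyword."""
    hits: dict[str, list[str]] = {}
    for page in Path(folder).glob("*.txt"):  # e.g. ethics.txt, author-guidelines.txt
        matching = [
            line.strip()
            for line in page.read_text(encoding="utf-8", errors="ignore").splitlines()
            if any(kw in line.lower() for kw in KEYWORDS)
        ]
        if matching:
            hits[page.name] = matching
    return hits

if __name__ == "__main__":
    for name, lines in find_ai_mentions("policy_pages").items():
        print(f"\n== {name} ==")
        for line in lines:
            print(" -", line)
```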
5:30–7:00 Capture where and how they want AI use to be disclosed.
  1. Look for phrases like “disclose AI use”, “state in a note”, “acknowledgements”, “methods section”, or “editor’s note”.
  2. Write a one-line summary: “If I use AI for [type of help], I must disclose it in [place in the piece or submission form] using [style or example if they give one].”
  3. If they do not mention AI at all, write: “No explicit AI policy mentioned — treat as human-written only, ask editor later if in doubt.”
Rule of thumb: When a site is silent, you stay more transparent, not less. A short honest note is much safer than a hidden tool.
7:00–8:30 List acceptable AI assists vs “red lines”.
  1. On the basis of what you have read, draw up your own two lists in your notebook:
    • “OK assists” (for example, grammar checks, rephrasing your own text, translating your own notes, outlining ideas you already had).
    • “Red lines” (for example, generating whole articles, fabricating quotes, inventing sources, uploading confidential documents into public tools).
  2. Write one sentence that explains the difference in your head: “AI tools may [support tasks] but must not [replace core reporting and original writing].”
OK assist: Grammar, spelling, readability suggestions on text you already wrote.
OK assist (with care): Brainstorming angles, outlines, and questions you will fact-check yourself.
Caution zone: Summarising long sources; you must verify and link to originals.
Red line: Submitting AI-generated paragraphs as if you wrote them from scratch.
8:30–10:00 Check privacy, data, and image rules that affect AI use.
  1. On the privacy policy, look for warnings about uploading personal, confidential, or unpublished data into third-party tools.
  2. On the images or media guidelines, note whether AI-generated images, composites, or synthetic media are allowed, and under what conditions.
  3. Write one line like: “For this outlet I must [never upload interview transcripts / anonymise data / avoid AI images unless cleared] before I use any external tool.”
Protect yourself: If the policy warns against sharing unpublished or sensitive information with external services, do not paste full drafts or raw interview notes into any AI tool that stores or uses prompts to train models.
10:00–12:00 Do a quick feasibility & integrity gauge for your own workflow.
  1. List three tasks where AI could help you while staying inside the outlet’s rules (for example, headline variants, outline options, grammar smoothing).
  2. List three tasks where you commit to staying fully human (for example, reporting, final wording of key claims, choice of quotes and facts).
  3. Write a one-line internal promise: “For this outlet my AI use will stay at [light / medium] assist level and I will remain clearly responsible for all facts, structure, and final wording.”
Ethics & AI comfort meter — adjust your habits until the needle feels safe
Map

What you collect in one sitting (and how it protects you)

At the end of one 12-minute intake you will have ten data groups. Together they show you how to stay original, cite properly, use AI tools safely, and talk honestly about your process. That mix protects your time, your byline, and your chances of getting repeat paid assignments.

Group | What to write (one line each) | Where you find it
Plagiarism definition | “For [Outlet] plagiarism means [copying text / ideas / structure] and they forbid [specific behaviour].” | Ethics / Editorial policy / Plagiarism page
Originality expectations | “They expect [original reporting / new angle / new synthesis], not recycled or spun content.” | Author guidelines, submissions page
Citation style | “Show sources using [inline links / references / footnotes] and prefer [primary / high-quality sources].” | Instructions for authors, style guide
Quote handling | “Direct quotes need [quotation marks, names, context] and sometimes [recorded consent].” | Interview policy, ethics notes
AI allowance | “Generative AI tools may be used for [editing / translation / idea support] but may not [write the article].” | AI policy, ethics, author guidelines
AI disclosure | “If AI is used, disclose in [note / acknowledgements / cover form] with [simple wording or format].” | AI or technology policy, submission form
Data & privacy | “Do not upload [personal / confidential / unpublished] material into third-party tools without permission.” | Privacy policy, newsroom AI guidelines
Image & media rules | “AI-generated images are [allowed / restricted / banned] and require [labels / credits / extra checks].” | Photo / graphics / visual standards page
Enforcement & consequences | “If rules are broken, they may [reject / retract / ban contributors] and may inform [other editors / institutions].” | Corrections, misconduct, or sanctions policy
Personal boundaries | “For this outlet I personally will let AI help with [safe tasks] but I will keep [core tasks] fully human.” | Your internal notes (this SOP)
Minimum viable ethics intake: If you are short on time, capture at least five lines: plagiarism definition, citation style, AI allowance, AI disclosure, and consequences. These five items alone prevent most serious problems for working writers.
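If you prefer to keep these notes as a structured file rather than a notebook page, here is a minimal sketch of the ten groups as one record per outlet. The field names simply mirror the table above and are not required by any outlet; keep whatever labels work for you.

```python
# Sketch of the ten intake groups as one structured record per outlet.
# Field names mirror the table above; they are illustrative only.
from dataclasses import dataclass, asdict
import json

@dataclass
class EthicsIntake:
    outlet: str
    plagiarism_definition: str
    originality_expectations: str
    citation_style: str
    quote_handling: str
    ai_allowance: str
    ai_disclosure: str
    data_and_privacy: str
    image_and_media_rules: str
    enforcement: str
    personal_boundaries: str

intake = EthicsIntake(
    outlet="[Outlet]",
    plagiarism_definition="For [Outlet], plagiarism means [copying text / ideas / structure].",
    originality_expectations="They expect [original reporting / new angle / new synthesis].",
    citation_style="Show sources using [inline links / references / footnotes].",
    quote_handling="Direct quotes need [quotation marks, names, context].",
    ai_allowance="AI may be used for [editing / translation / idea support] but may not [write the article].",
    ai_disclosure="If AI is used, disclose in [note / acknowledgements / cover form].",
    data_and_privacy="Do not upload [personal / confidential / unpublished] material into third-party tools.",
    image_and_media_rules="AI-generated images are [allowed / restricted / banned].",
    enforcement="If rules are broken, they may [reject / retract / ban contributors].",
    personal_boundaries="AI may help with [safe tasks]; [core tasks] stay fully human.",
)

print(json.dumps(asdict(intake), indent=2))  # one archivable record per outlet
```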
Fill this template

Template_01: Website Ethics & AI Canvas — [Editable] Fill with your own data

Note: Replace every [bracketed placeholder] with information from the specific outlet you want to write for.

Copy this block into your notes and fill it in using complete but short sentences. The goal is to see, on one screen, how you will stay original, honest, and safe while still using AI as a helper in a clear and responsible way.

House definition: For [Website], plagiarism means [copying text / copying ideas / copying structure without credit].
Originality expectation (1 sentence): They expect each article to provide [new reporting / fresh angle / new synthesis of trusted sources] that has not appeared elsewhere on their site.
Self-plagiarism / duplicate submission: They [allow / do not allow] you to reuse your own previously published material and they require [full disclosure / exclusive submissions].
Tip: paste one key sentence from their plagiarism or ethics page so you can refer back to their wording.
Preferred citation method: Use [inline links / formal references / footnotes] to credit sources for facts, data, and quotes.
Source quality rule: They prefer [primary research / official statistics / direct interviews / recognised experts] over [blogs / anonymous social posts].
Quote handling: Direct quotes must include [quotation marks, speaker name, role, and context].
Paraphrase rule: When paraphrasing, you must still [credit the original source] and avoid copying distinctive wording.
Allowed types of AI assist: You may use AI tools for [grammar / spelling / language polishing / translation / outline suggestions].
AI must not do: AI must not [write full articles / invent quotes / fabricate sources / decide main arguments].
Human responsibility line: The outlet states that the human author remains responsible for [all facts, reasoning, originality, and final wording] even when AI is used.
Your own comfort level: You will personally keep AI use at [light / medium] level for this outlet.
Disclosure required? This outlet [requires / encourages / does not mention] explicit disclosure of AI assistance.
Disclosure location: AI use should be mentioned in [author note / acknowledgements / methods section / submission form].
Disclosure content: You should say that AI was used for [editing / translation / idea support / image generation] and confirm that you verified all content yourself.
Extra approvals: For sensitive content, you must [seek editor approval / follow additional newsroom rules] before any AI-assisted workflow.
Confidentiality rule: You must not upload [personal data / unpublished manuscripts / private interviews / embargoed information] into external AI tools.
Media & images: AI-generated images are [allowed with clear labelling / allowed only for illustration / banned for news photos].
Storage & logs: The outlet warns that some tools may [store prompts / reuse data for training] and expects you to avoid tools that break privacy expectations.
Your safety rule: For this outlet you will only use tools that you understand and that keep [source materials and interview data] safe.
Potential actions: If plagiarism or hidden AI use is discovered, they may [reject / retract / correct / ban contributors / inform institutions].
Detection tools: They may use [plagiarism checkers / AI-detection tools / manual review] on submissions.
Risk rating (1–5): You rate this outlet’s strictness on AI & plagiarism as [1–5] where 5 is very strict.
Your personal safeguard: Before every submission here you will run a quick self-audit on [plagiarism / sources / AI use / disclosure] using this SOP.
Pro tip: Write like you expect an ethics editor to read your notes. Clear, honest sentences now save awkward emails later.
Pre-Filled · Demo Example

Example Canvas — Fictional “TechEthic Weekly” (inspired by serious tech magazines)

This is a fictional but realistic example so you can see how your finished canvas might look when you analyse a serious technology and science outlet. Do not copy this into a real pitch; instead, use it as a model when you fill in data for your own target website (for example, a tech magazine, a niche blog, or a policy journal).

House definition: For TechEthic Weekly, plagiarism means copying sentences, paragraphs, or distinctive structures from other sources without quotation marks and credit, including copying your own previously published work without disclosure.
Originality expectation: Each piece must add something new: fresh reporting, a new angle on existing research, or a synthesis that clearly builds on named sources instead of hiding behind generic “experts say” language.
Self-plagiarism / duplicate submission: They forbid submitting the same article to multiple outlets at once and expect you to tell them if parts of your draft are adapted from older work or from your own newsletter.
Preferred citation method: TechEthic Weekly wants clean inline links to primary documents, data dashboards, research papers, or official statements, with the linked text describing what the reader will see.
Source quality rule: They prefer peer-reviewed studies, official reports, and direct interviews with named experts. Blog posts and corporate marketing materials may be mentioned but do not count as core evidence for strong claims.
Quote handling: Direct quotes must include the person’s full name, role, and relevant affiliation the first time, with clear context about when and how the quote was given (interview, email, public talk).
Paraphrase rule: Even when you paraphrase an idea from a study or article, you still link and name the source so readers can trace the reasoning back to the original work.
Allowed types of AI assist: Writers may use AI tools to brainstorm questions, outline possible structures, translate their own notes, and polish grammar and clarity in the final draft.
AI must not do: The guidelines explicitly state that AI tools must not generate publishable text, news copy, or quotes and must not be used to summarise sources that the writer has not personally read.
Human responsibility line: A human writer remains entirely responsible for the accuracy, fairness, and originality of every sentence, even if an AI tool suggested words or phrasing along the way.
Your own comfort level: For this outlet, you decide to let AI help with outline prompts and language polishing, while keeping all reporting, structure decisions, and final paragraphs fully human-written.
Disclosure required? TechEthic Weekly expects AI use to be disclosed for substantial assistance, such as translation, structural suggestions, or data-driven summaries, even though light grammar tools do not require a note.
Disclosure location: They ask for a short sentence in the contributor notes to the editor on submission, and, for explainers that rely heavily on translation or summarisation tools, a small acknowledgement at the end of the piece.
Disclosure content: The sample wording they suggest is along the lines of “The author used AI-assisted tools to help with translation and language editing. All facts, interpretations, and conclusions are the author’s own.”
Extra approvals: For investigations or sensitive stories, they expect writers to clear any AI-assisted workflows with the assigning editor before using them at all.
Confidentiality rule: Their policy bans uploading unpublished manuscripts, confidential documents, or identifiable personal data into any third-party AI system that retains prompts or trains on user inputs.
Media & images: AI-generated illustrations are allowed only for clearly labelled conceptual art, never for news photographs or evidence. Synthetic faces, synthetic “photojournalism”, and misleading composites are not allowed.
Storage & logs: They remind contributors that many AI tools keep logs of prompts and that writers must avoid tools that cannot guarantee reasonable privacy for notes and draft fragments.
Your safety rule: You commit to using only tools that offer a private or non-training mode, and you will never paste full interview transcripts or private emails into an AI system.
Potential actions: TechEthic Weekly reserves the right to reject submissions, retract published pieces, append corrections, or stop commissioning writers who break plagiarism or AI rules.
Detection tools: They use a combination of human editing, plagiarism-detection services, and internal checks for unusual style patterns or unverifiable sources.
Risk rating (1–5): You rate their strictness as 5/5 — which is good, because strictness is a sign that they care about their readers and will value careful writers.
Your personal safeguard: Before every submission, you will check your draft for copied phrasing, confirm each link and quote against the original source, and note any AI-assisted steps in a small private log.
Internal one-line brief: “For TechEthic Weekly I use AI only to tidy my own text and brainstorm structures, while all facts, angles, and sentences are checked, written, and owned by me as a human author.”
Search the right pages

Where to find plagiarism, citation, and AI rules in under ten clicks

Most outlets scatter their ethics details across multiple pages. This map shows you which pages usually hide the important lines, so you can open and scan them in a fixed order instead of clicking at random every time.

About / Editorial Standards: Mission, values, broad promises about accuracy, fairness, and originality.
Author Guidelines / Submissions: Plagiarism warnings, citation expectations, and sometimes AI-use paragraphs.
Ethics / AI / Technology Policy: Specific rules on AI tools, disclosure, conflicts of interest, and corrections.
Terms & Privacy: Rules about data sharing, content ownership, and what you may not upload to external services.
Corrections & Complaints: Clues about how they respond to plagiarism or AI-related complaints.
Sample Articles & Author Pages: Real-world examples of how they use quotes, links, and any AI-related notes.
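If you run this intake often, a small helper can assemble the usual tab set for a new domain. The sketch below is an illustration only: the path names are common guesses, not a guaranteed site map, so always confirm against the outlet's own footer links.

```python
# Sketch: build a list of likely policy pages to open for a new outlet.
# The path names are common guesses, not a guaranteed site map.
COMMON_POLICY_PATHS = [
    "about", "editorial-standards", "ethics", "ai-policy",
    "submissions", "write-for-us", "contributor-guidelines",
    "terms", "privacy", "corrections",
]

def candidate_policy_urls(domain: str) -> list[str]:
    """Return candidate URLs to open in tabs for the 12-minute intake."""
    base = domain.rstrip("/")
    if not base.startswith("http"):
        base = "https://" + base
    return [f"{base}/{path}" for path in COMMON_POLICY_PATHS]

for url in candidate_policy_urls("example-outlet.com"):
    print(url)
```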

Signal heatmap (5 = strongest, 1 = weakest). Locations to scan and rate on that scale: Guidelines → Plagiarism; Guidelines → Citations; AI Policy → Allowed uses; AI Policy → Disclosure; Terms → Data uploads; About → Ethics; Privacy → Logs & storage; Corrections → Sanctions; Sample pieces → Real citations; Sample pieces → Tool notes; Footer → Misc policies; Homepage → AI rules.
Cross-check rule: If one page is fuzzy, look for the same topic on a second page. For example, if the AI policy is vague, check the general ethics guidelines to see how they talk about authorship and responsibility.
Originality

Plagiarism, paraphrasing, and patchwriting — a quick grid for AI-era writing

Generative AI makes it very easy to fall into “patchwriting”, where you copy the structure and key phrases of a source or of AI output without really creating your own explanation. This grid helps you see the difference between safe behaviour and risky behaviour before you start using tools.

Behaviour | What it looks like | Risk level | What you should do instead
Direct quotation with credit | You copy a sentence word-for-word, put it in quotation marks, and name the source with a link or reference. | Low | Use sparingly for powerful statements. Surround quotes with your own explanation and context.
Honest paraphrasing with credit | You read a source, then explain the idea in your own words and still mention where it came from. | Low | Check that your wording is genuinely different and that you have not just swapped a few synonyms.
Patchwriting from a source | You keep the same sentence structure and order of ideas, changing only a few words, and you may or may not mention the source. | High | Close the tab or AI window, summarise ideas roughly in your notes, then write your explanation later from memory with a fresh structure.
Copying AI output as-is | You ask a tool to “write an article” and then submit large chunks of that output unchanged, with no disclosure. | Very high | Use AI only for support tasks. Draft in your own voice, then, if allowed, use tools for language cleanup and clearly disclose substantial assistance.
Idea mining without copying wording | You ask an AI tool for possible angles or questions, then choose a few and write everything yourself, verifying facts from primary sources. | Moderate | Keep a note of which angles came from tools, and make sure your facts come from human-checked sources, not from AI guesses.
Important: Many outlets now treat undisclosed AI-generated text the same way as traditional plagiarism, because someone else’s model produced those words and you took authorship without saying so. Use tools, but keep the human brain — yours — clearly in charge and clearly visible.
Assist vs. Authorship

AI assist matrix — decide what you will and will not automate

Not every AI use is equal. Some uses are simple productivity boosts that keep you well inside ethical boundaries. Others shift authorship or invent facts. In this matrix you decide, for each task, how comfortable you are using AI and what the outlet allows.

AI intensity scale: none → heavy.
Task | AI use allowed? | Your decision | Notes
Brainstorming angles & titles | [Outlet] [allows / not mentioned] | You will [sometimes / never] use AI to spark options. | Always choose the final angle yourself and adjust for the outlet’s audience and section.
Outlining structure | [Outlet] [allows AI support / wants human-planned structure] | You will let AI suggest 2–3 outline shapes, then design your own final outline. | Check that your final outline matches the outlet’s typical article patterns.
Grammar and clarity editing | Most outlets allow tools for line-level language checks. | You will happily use AI to polish sentences you already wrote. | Re-read every suggestion; do not accept changes that alter meaning or introduce errors.
Drafting full paragraphs | Many serious outlets discourage or forbid AI-written text. | You will draft all paragraphs by hand and keep AI away from main writing. | If you ever experiment, you will treat the output as a rough prompt and rewrite completely in your own words.
Summarising sources | Some outlets allow this if you check against the original. | You will copy key sections from the original source into your notes and then write your own summary; AI is optional and checked. | Never rely on AI summaries alone for scientific papers, legal documents, or sensitive topics.
Money angle: A clear AI assist matrix keeps you fast without crossing ethical lines. Editors learn that you are efficient and trustworthy, which leads to repeat work and higher rates over time.
Self-audit

AI use log — one small table to protect your future self

A simple log of how you used AI on each assignment helps you answer questions later if an editor or reader asks. You do not need a complex system. You just need a place where you wrote, in plain language, what you did and how you checked it.

Piece / Outlet | AI tool(s) | What task they helped with | How you verified | Disclosure needed?
[Working title for article] | [Tool name or “none”] | [Brainstormed angles / outline / grammar / translation] | [Checked all facts against sources / re-read all edits / compared summary to original] | [Yes → note in submission / No → light grammar only]
[Second article] | [Tool name] | [Drafted interview questions] | [Manually rewrote questions and checked they were fair and clear] | [Probably no, but keep internal log]
Five-minute safeguard: Fill this log right after you finish a draft while the steps are still fresh. Future you will be very grateful if a question ever comes up about how the piece was produced.
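If you would rather keep this log as a file than as a table in your notes, here is a minimal sketch that appends one row to a CSV version of the log above. The column names follow the table; the file name is an assumption.

```python
# Sketch: append one row to a plain CSV version of the AI use log above.
# Column names follow the table; the file name is an assumption.
import csv
from pathlib import Path

LOG_FILE = Path("ai_use_log.csv")
COLUMNS = ["piece_or_outlet", "ai_tools", "task", "verification", "disclosure_needed"]

def log_ai_use(piece: str, tools: str, task: str, verification: str, disclosure: str) -> None:
    """Add one row to the log, creating the file with a header row if needed."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(COLUMNS)
        writer.writerow([piece, tools, task, verification, disclosure])

log_ai_use(
    piece="Working title / Outlet",
    tools="none",
    task="Brainstormed angles only",
    verification="Checked all facts against primary sources",
    disclosure="No (light assist only)",
)
```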
Advanced Section · Skippable · Ethics & AI

Advanced Ethics & AI Data Collection — keep a clean record of every assist before you publish

You already understand the basics of plagiarism, citations, and being honest about your AI use. This advanced part of the SOP helps you build a simple, low-stress system so you can track how you use AI tools, protect your originality, respect other people’s work, and prove your integrity when an editor or journal asks. You will not rely on memory. You will keep calm notes about which tools you used, on which parts of your draft, which sources you checked, and how you made sure the final text is your own thinking in your own words. This way you stay safe on professional outlets, serious blogs, magazines, and journals, even when rules keep changing.

Ethics & AI · Plagiarism & Citations · AI Disclosure · Acceptable AI assists · Data collection only
Map

AI use spectrum canvas — acceptable, risky, and forbidden assists

Different websites and journals draw the line in different places, but the questions stay the same: What can AI help you with? What needs disclosure? What must you avoid completely? You will keep your own spectrum so you never have to guess in the middle of a deadline.

Generally acceptable (with care): Brainstorming angles, clarifying outlines, turning bullet points into questions, translating your own text for understanding, drafting interview questions.
High-risk (needs rules + disclosure): Summarising sources, checking grammar and tone, suggesting alternative phrasings, generating code or formulas, drafting tables or captions.
Forbidden / almost always banned: Generating full drafts you copy-paste, inventing data or citations, mimicking a specific writer’s voice, rewriting published articles, hiding AI use in any part of the manuscript.

AI use matrix — note what your target outlet allows

Task | Your note for this outlet | Typical risk level
Idea brainstorming | [Allowed? Needs disclosure? Any limits?] | Low — but avoid copying wording directly.
Outline help | [Can you ask AI for structure hints?] | Low–medium — make sure final structure is your choice.
Language polishing | [Is AI editing mentioned in guidelines?] | Medium — may need disclosure or full manual review.
Drafting full paragraphs | [Almost always discouraged or banned for serious outlets.] | High — plagiarism and originality concerns.
Summarising research papers | [Allowed only with manual cross-check?] | High — risk of errors and missing context.
Generating citations or references | [Many outlets ban AI-generated references.] | High — hallucinated sources are common.
Creating images / charts | [Check policy on AI images, credit, and consent.] | Medium–high — copyright and consent issues.
Important: This matrix is your personal note. You always follow the specific rules of the outlet, the journal, your university, or your client. When their policy conflicts with your habits, you adjust your habits, not their rules.
Template · Fill this

Template_02: AI assist log — how you used AI for this article

How to use: For each article, blog post, magazine piece, or journal manuscript, copy this block into your notes and replace the [bracketed placeholders] with your true details in complete sentences. This becomes your proof that you used AI tools carefully and honestly.
Outlet / website: [Name of publication or blog]
Working title: [Project title]
Format & section: [e.g., 1,200-word guide in “Technology” section]
Draft date: [YYYY-MM-DD]
Tool(s): [Name and version] (example: [ChatGPT-5.1]).
Access type: [Browser / API / plugin / built-in editor tool].
Other tools alongside AI: [style guide, grammar checker, plagiarism checker].
Brainstorming: [Yes/No — describe briefly, e.g., suggested subtopic list].
Planning & outlining: [Yes/No — how much of outline came from AI?].
Language polishing: [Yes/No — which parts: intro, headings, transitions?].
Summarising sources: [Yes/No — which sources, and did you cross-check manually?].
Core argument / narrative: [Describe how you built it yourself from sources].
Key examples / case studies: [You chose and described them in your own words].
Final structure & headline: [You made the final decisions].
Fact-check path: [Web search / original documents / journal articles / interviews].
Bias checks: [Which groups or perspectives did you consciously review?].
Plagiarism scan: [Tool and date, plus quick note on result].
Outlet rule: [Where they want you to disclose AI use].
Your chosen spot: [Methods / end note / footnote / cover letter].
What you will state: [Simple description of which tasks used AI].
Pro tip: Fill this log right after each focused work session. You will forget details if you wait until the night before submission.
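If you fill Template_02 for many pieces, a tiny helper that writes a fresh, blank copy per article can save you from reusing an old, half-filled log. This sketch is only one way to do it; the folder and file naming are assumptions, and the field labels mirror the template above.

```python
# Sketch: write a fresh, blank copy of Template_02 for each new article.
# Folder and file naming are assumptions; field labels mirror the template above.
from pathlib import Path

TEMPLATE_FIELDS = [
    "Outlet / website", "Working title", "Format & section", "Draft date",
    "Tool(s)", "Access type", "Other tools alongside AI",
    "Brainstorming", "Planning & outlining", "Language polishing", "Summarising sources",
    "Core argument / narrative", "Key examples / case studies", "Final structure & headline",
    "Fact-check path", "Bias checks", "Plagiarism scan",
    "Outlet rule", "Your chosen spot", "What you will state",
]

def new_assist_log(slug: str) -> Path:
    """Create notes/<slug>_ai_assist_log.txt with one empty line per field."""
    out = Path("notes") / f"{slug}_ai_assist_log.txt"
    out.parent.mkdir(exist_ok=True)
    out.write_text("\n".join(f"{field}: [fill in]" for field in TEMPLATE_FIELDS), encoding="utf-8")
    return out

print(new_assist_log("techethic-weekly-draft"))
```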
Originality

Originality defence — three walls between you and plagiarism

Plagiarism is not only copy-paste from one article. It also includes close paraphrasing, uncredited ideas, and now unlabelled AI text that quietly echoes somebody else’s work. You will build three simple “walls” that make plagiarism very unlikely in your workflow.

Wall 1 — Source notes

You keep a list of sources with full details and short, handwritten summaries in your own words before you open any AI tool.

Wall 2 — Concept first, wording later

You decide on the argument, structure, and examples yourself based on your notes, and only then you ask AI for help on clarity or extra angles.

Wall 3 — Manual check + tools

You read your draft line by line, check against your sources, and then run plagiarism / AI-overlap tools as a second opinion.

Originality log — one row per core section

Section of your piece | Main sources you used | AI involvement | Your originality note
Introduction | [Source A, Source B, your own experience] | [No AI / AI helped with wording only] | [Why this intro is your own framing]
Key argument 1 | [Research paper, interview, report] | [AI offered outline suggestions] | [How you combined and re-expressed ideas]
Example / case study | [Company report, news article] | [No AI] | [Your own explanation and interpretation]
Conclusion | [Your synthesis of all the above] | [AI suggested alternative phrasing] | [You kept your own opinion and deleted AI phrasing you did not like]
Guardrail: If any paragraph comes entirely from an AI suggestion, without your own re-thinking, you rewrite it. If you cannot explain a paragraph in your own simple words, you do not submit it.
Citations

Citation planner — who deserves credit when AI is in the mix

When AI tools help you, you still must give credit to the human work underneath. This section helps you separate three things: the original source of ideas or data, the AI tool that rearranged or summarised your notes, and your own interpretation. You will capture all three so your references stay clean and transparent.

Scenario | Who you must cite | What you log for this SOP
You read a research paper and then ask AI to summarise it for you. | The research paper (original authors), not the AI tool. | Note the paper’s full details and your own summary; note that AI helped you check understanding.
You ask AI for a list of statistics and then verify each number in official reports. | The official reports or datasets where you verified the stats. | Record both the tool prompt and the final human sources for each number.
You ask AI to tidy the language of a paragraph you wrote from your own notes. | No new source, but some outlets still want AI mentioned in a note or methods section. | Log which paragraph you polished and whether any wording was significantly changed.
You ask AI for “five key arguments” about a topic and take one idea you had not seen anywhere else. | No clear human author, but you still must check if similar ideas already exist in the literature. | Log this idea separately and search manually; if you find a close match, cite the human source.
You let AI invent a reference list and later discover some sources do not exist. | You cannot cite non-existent sources. You must delete or replace them. | Log this as a “hallucination incident” so you remember to avoid this pattern next time.
Simple rule: If information or wording came from a specific human-created work you can identify, you cite that work. AI tools are helpers, not authors. You may still need to disclose tool use, but you do not treat AI as a co-author or as the “source.”
Transparency

Disclosure planner — where and how your AI use will be visible

Many publishers now ask you to disclose your AI use clearly. Some want a short note in the methods, some prefer the acknowledgements, and some prefer a statement in your cover letter only. You will map each outlet’s preference so you never forget to be transparent.

Outlet 1
  Name: [Publication or journal name]
  Policy link: [URL to AI or ethics page]
  Disclosure location: [Methods / Note / Acknowledgements / Cover letter]
  Required details: [Tools, versions, tasks, dates, or “none allowed”]
Outlet 2
  Name: [Publication or blog name]
  Policy link: [URL]
  Disclosure location: [e.g., end of article in italics]
  Required details: [Short description of support tasks only]
Outlet 3
  Name: [Journal / magazine]
  Policy link: [URL]
  Disclosure location: [Special section or field in submission system]
  Required details: [Exact wording they suggest, if any]
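Once your AI assist log is filled, you can draft the disclosure sentence directly from it. The sketch below shows one possible wording; it is not any outlet's official phrasing, so always prefer the exact wording an outlet's policy suggests when it provides one.

```python
# Sketch: compose a draft disclosure note from the tasks you logged.
# The wording is only an example; prefer the outlet's own suggested phrasing.
def draft_disclosure(tasks: list[str]) -> str:
    if not tasks:
        return "No AI tools were used in the preparation of this piece."
    joined = ", ".join(tasks[:-1]) + " and " + tasks[-1] if len(tasks) > 1 else tasks[0]
    return (
        f"The author used AI-assisted tools for {joined}. "
        "All facts, interpretations, and final wording were checked and written by the author."
    )

print(draft_disclosure(["language polishing", "translation of the author's own notes"]))
```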

Red-flag situations you always disclose

Situation | Why disclosure is needed | Your note
AI helped summarise long or complex sources. | Readers need to know how you processed the evidence. | [Where you will mention it and how you checked accuracy.]
AI helped with translation or language for non-native writing. | Transparency protects you from accusations of ghostwriting. | [Which sections were translated or smoothed, and how you verified meaning.]
AI suggested structure or headings. | Editors may want to know that the shape was AI-assisted. | [How much of the final structure is still your decision.]
AI suggested alternative wording for quotes or paraphrases. | Misrepresentation of sources is a serious ethical issue. | [You checked each quote against the original and kept the source’s meaning.]
Never do this: Do not quietly remove AI-disclosure notes because you are afraid of rejection. Editors are increasingly strict about undeclared AI use. Honest disclosure is safer, even if it means your piece needs extra review.
Responsibility

Authorship & credit — you stay responsible, AI stays a tool

Most serious publishers agree on one thing: AI tools cannot be authors because they cannot take responsibility for the work or sign legal agreements. In practice, this means you keep full responsibility for everything the AI helped you draft or polish. This section helps you track who did what in collaborative or multi-author projects.

Element of the work | Who is responsible | What to log
Idea and research question | Human authors only | [Who proposed the idea, and which sources or events inspired it.]
Method, outline, and argument | Human authors, possibly with AI suggestions | [How you turned suggestions into a final plan; which parts you accepted or rejected.]
Draft text | Main author(s) responsible for every word | [Where AI assisted, which prompts you used, and what you rewrote yourself.]
Citations, references, data | Human authors only | [How you verified each citation and dataset; any AI hallucinations you corrected.]
Final approval | All listed authors | [Date each author read the final draft and agreed to its content.]
Shared document habit: On multi-author projects, add a short “AI use” note in your shared document where each author lists which tasks they completed with or without AI. This keeps everyone aligned and avoids surprises at submission time.
Sensitive Areas

Data, images, and sensitive topics — where AI use is especially risky

Some tasks are more sensitive than others. Generating fake data, editing images that document real events, or writing about vulnerable groups with AI-generated wording can cause serious harm. You will keep a short risk map so you remember to minimise or avoid AI use where it matters most.

Research data and results: Do not use AI to create or manipulate original data, experiment results, measurements, or research images. These must reflect reality, not a model’s guess.
Photos, people, and consent: Be careful with AI-edited photos of humans, especially in news or case studies. Always follow consent, privacy, and outlet policies before you alter any person’s image.
Health, law, finance: AI outputs in medical, legal, and financial topics may be dangerously wrong. Use AI only for planning or wording and always rely on verified expert sources.
Generic examples & metaphors: AI is safer when suggesting metaphors, headlines to consider, or neutral examples you later adjust.

High-risk AI use log

Topic or asset | AI use | Extra checks you performed | Decision
Data table or graph | [Asked AI to suggest visual formats only] | [Verified all numbers from original dataset] | [Approved / rewrote / dropped]
Photo illustration | [Used AI to create a symbolic, non-real image] | [Checked outlet’s policy, added clear label if required] | [Approved / changed / removed]
Case study involving vulnerable group | [No AI on wording; AI used only for outline] | [Sensitivity read, expert review, or extra manual edit] | [Approved]
Rule of thumb: The more your work affects real people’s lives, the less you rely on AI for content generation and the more you rely on primary sources and human experts.
Risk

AI task risk heatmap — see your danger zones at a glance

Different AI tasks carry different levels of risk for plagiarism, misrepresentation, or bias. This heatmap gives you a quick visual reminder. You can adjust it for each new outlet or project.

Risk scale: 1 = lowest risk, 5 = highest risk. Tasks to place on the scale: Brainstorm topics; Outline suggestions; Summaries of sources; Full draft generation; Citation & reference lists; Grammar & style edits; Headline ideas; Metaphors / analogies; Data creation / modification; Image generation; Code snippets; Sentiment / bias analysis.
How to use this: For each project, circle or note the tasks where you plan to use AI. If most of them are level 4 or 5, you slow down, add more manual checks, and consider reducing your AI use.
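If you keep your risk ratings in a file, a few lines of code can apply the same rule automatically. The levels below are illustrative defaults, not fixed values; adjust them per outlet and project.

```python
# Sketch: flag a project plan in which most intended AI tasks sit in the
# high-risk band (levels 4-5). The scores are illustrative defaults only.
RISK_LEVELS = {
    "brainstorm topics": 1,
    "headline ideas": 1,
    "metaphors / analogies": 2,
    "outline suggestions": 2,
    "grammar & style edits": 2,
    "sentiment / bias analysis": 3,
    "code snippets": 3,
    "summaries of sources": 4,
    "image generation": 4,
    "citation & reference lists": 5,
    "full draft generation": 5,
    "data creation / modification": 5,
}

def review_plan(planned_tasks: list[str]) -> None:
    """Print how many planned tasks are high risk and warn if they dominate."""
    high = [t for t in planned_tasks if RISK_LEVELS.get(t, 3) >= 4]
    print(f"{len(high)} of {len(planned_tasks)} planned AI tasks are level 4-5: {high}")
    if len(high) * 2 > len(planned_tasks):  # more than half are high risk
        print("Slow down: add manual checks or reduce AI use before drafting.")

review_plan(["outline suggestions", "summaries of sources", "full draft generation"])
```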
Tools

Plagiarism & AI-detection tools log — treat them as advisors, not judges

Many editors and institutions use plagiarism and AI-detection tools. These tools are not perfect, but you can still use them as early warning systems. This section helps you record which tools you used, what they reported, and how you responded.

Tool | Check run | Headline result | Your manual interpretation | Action you took
[Plagiarism checker name] | Full draft / selected sections | [e.g., 6% similarity; highlighted phrases] | [Why highlighted parts are safe or need rewriting] | [Rewrote intro, added citation, or accepted as common phrase]
[AI-content detector] | Full article / sensitive parts | [e.g., “likely mixed” or “highly AI-like” in some sections] | [You reviewed those areas, compared with your notes and sources] | [Rephrased paragraphs, added disclosure, or kept with confidence]
[Grammar / clarity checker] | Full article | [Suggested many small edits] | [You accepted only changes that did not change meaning] | [Recorded date and version of final draft]
Safety note: Do not write “tool said it is fine” as your only defence. Your own understanding of the sources and your line-by-line review are still the main protection for your reputation.
Checklist

Master Ethics & AI checklist — one page to print and tick

Before you send your work to any editor, client, blog, or journal, you run this checklist once. If you cannot tick a box honestly, you pause and fix the gap. This habit protects your future income, your bylines, and your credibility.

Area | Question | Tick when true
Sources | I have a list of all key sources with full details and my own summaries. | [ ]
AI use log | I filled Template_02 for this piece with real, specific notes. | [ ]
Originality | Every paragraph can be explained in my own simple words without looking at AI output. | [ ]
Plagiarism | I checked overlaps with tools or manual comparison and fixed or cited anything doubtful. | [ ]
Citations | Every fact, quote, statistic, and borrowed idea points back to a real human source. | [ ]
AI disclosure | I know the outlet’s AI policy and have a clear plan for where and how I will disclose my AI use. | [ ]
High-risk areas | I avoided AI for data creation and for wording of extremely sensitive sections. | [ ]
Responsibility | I accept that I am fully responsible for this text, not the AI tool. | [ ]
Final read | I read the entire piece aloud once and fixed any unclear or suspicious lines. | [ ]
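For writers who like an explicit gate before hitting send, the sketch below turns the checklist into a simple yes/no check. The keys and questions just restate the rows above; nothing here replaces an honest manual read.

```python
# Sketch: a pre-submission gate that mirrors the checklist above.
# Every answer must be an honest "yes" before you send the piece.
CHECKLIST = {
    "sources": "List of key sources with full details and my own summaries?",
    "ai_use_log": "Template_02 filled with real, specific notes?",
    "originality": "Every paragraph explainable in my own words?",
    "plagiarism": "Overlaps checked and fixed or cited?",
    "citations": "Every fact, quote, and borrowed idea traced to a real human source?",
    "ai_disclosure": "Disclosure plan matches the outlet's AI policy?",
    "high_risk_areas": "No AI for data creation or extremely sensitive wording?",
    "responsibility": "I accept full responsibility for this text?",
    "final_read": "Whole piece read aloud once?",
}

def ready_to_submit(answers: dict[str, bool]) -> bool:
    """Return True only when every checklist item is ticked."""
    missing = [key for key in CHECKLIST if not answers.get(key, False)]
    if missing:
        print("Pause and fix before submitting:", ", ".join(missing))
        return False
    print("All boxes ticked honestly: safe to submit.")
    return True

ready_to_submit({key: True for key in CHECKLIST} | {"final_read": False})
```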
Practice

Practice sprint — 15-minute Ethics & AI check before submission

This sprint is your quick rehearsal. You can run it on a smaller blog post or a draft for a big outlet before you hit “submit.” Over time, it will feel normal and fast.

Minutes 0–4 — Map your AI use

Open your AI assist log and highlight the tasks where AI touched the text. Mark any high-risk areas like summaries, citations, or sensitive topics.

Minutes 4–8 — Check originality & sources

For each AI-touched section, read it side-by-side with your notes and sources. Confirm that wording is yours and every fact has a real, checked source.

Minutes 8–12 — Confirm policy fit

Re-read the outlet’s AI and plagiarism guidelines. Ensure your current draft and your disclosure plan match their exact rules.

Minutes 12–15 — Final ethical gut check

Ask yourself: “If an editor saw my AI logs, prompts, and notes, would I feel relaxed?” If the answer is yes, you are likely safe. If not, fix first, submit later.

Make it a habit: Run this sprint for your next three pieces in a row. After that, most steps will become automatic.
Appendix

Glossary — Ethics & AI terms you will meet often

Term | Simple meaning for your notes
Generative AI | Tools that can create new text, images, audio, or code based on patterns they learned from training data.
Hallucination | Confident but wrong output from an AI tool, such as made-up facts, data, or references.
Similarity index | A percentage score from a plagiarism tool showing how much your text overlaps with other documents it knows.
Undeclared AI use | When AI helps in the writing or research process but the author does not mention it anywhere.
Provenance | The traceable origin of ideas, data, and wording; “who thought of this first and where did it appear?”
Conflict of interest | A situation where your personal, financial, or professional interests might affect how you present information.
Attribution | The act of giving credit to the correct source of information, ideas, or quotations.
Authorship | The role of people who take public responsibility for the content; usually cannot be taken by a tool or system.
Wrap

Your Ethics & AI SOP is now complete

With this Ethics & AI SOP you now have a calm, repeatable system for using AI in your writing without risking your reputation or your income. You log how you use AI, you keep clear records of your sources, you separate your own thinking from the tool’s suggestions, you credit the right humans, and you disclose AI help wherever the outlet expects it.

Each time you write for a new website, magazine, or journal, you can copy these templates, adjust the risk levels, and fill the logs. Over time, you will work faster and still stay fully aligned with strict plagiarism rules, citation rules, disclosure expectations, and acceptable AI assists. Your work will feel modern because you use AI wisely, and trustworthy because you remain fully responsible for every published word.
