StoryDesk Creator Guidelines & Standards of Use

Effective upon account creation · Version 2.0 · March 2026

These guidelines apply to all StoryDesk accounts.

StoryDesk exists to help you publish content that sounds like you, represents your expertise, and lands with your audience. These guidelines define what doing that well means, and what falls outside the bounds of acceptable use on this platform.

By using StoryDesk, you agree to these guidelines.

How These Guidelines Work

StoryDesk governs its AI systems in line with recognised standards such as the OECD AI Principles and the NIST AI Risk Management Framework. These Creator Guidelines & Standards explain how those commitments show up in day-to-day use of the platform.

Disclaimer: These Guidelines describe how StoryDesk is intended to operate. They do not constitute legal advice or create any lawyer–client, fiduciary, or advisory relationship between you and StoryDesk. You remain solely responsible for your own legal and compliance obligations.

For details on how StoryDesk stores and processes your content, see our Privacy Policy and Terms of Service. These Guidelines sit alongside those documents and do not replace them.

1. You Own What You Publish

StoryDesk AI generates content based on source material you provide or on our aggregated trusted sources. You review it. You publish it. That makes you the author.

Source Material

You are responsible for the accuracy of everything you publish from the platform.

Review

You are responsible for reviewing all output before publishing.

Publication

You are responsible for the published post — regardless of which tool produced the draft.

2. What You Can't Create on StoryDesk

These uses are never allowed on StoryDesk.

2.1 Harmful Language

Note on subtle targeting.

We recognise that harmful content is not always obvious. Language that appears neutral at first pass can be constructed to subtly demean, exclude, or disadvantage a specific group. StoryDesk monitors for these patterns. See Section 5 for how we handle this.

2.2 Misinformation and Fabrication

2.3 Impersonation and Deception

2.4 Manipulation

2.5 Copyright Infringement

3. Content Standards

StoryDesk holds all platform output to defined quality standards. These aren't suggestions: the platform enforces them automatically.

Source Grounding

AI-generated content is grounded in verifiable source material. Our proprietary Fact Check and Trust Score systems provide you with additional information about that grounding; it is up to you to apply these signals to your content before publishing.

Language Filtering

Prohibited language patterns — including specific banned words — are filtered at output level. See the full list in the StoryDesk AI Operating System.

4. Disclosure

We're proud of the work you create on StoryDesk and want your audience to value it, too. While casual or internal use doesn't require a disclaimer, we recommend disclosure for published editorial or professional advice where AI has materially shaped the final output.

5. How StoryDesk Monitors for Harmful Content

StoryDesk's AI governance layer applies quality and ethics rules to every draft. Before any output is returned, it checks the content against every rule in this document — including the ethical ones. This happens on every generation, every session.

What StoryDesk checks for

Prohibited Language

Prohibited language patterns — explicit and structural.

AI Clichés

Obvious AI-generated phrasing such as filler words, hollow superlatives, and generic motivational language.

Impersonation Signals

Named individuals in contexts that misrepresent their statements or identity.

Subtle Targeting

Language that appears neutral but is constructed to demean, exclude, or disadvantage a specific group. This includes framing, omission, coded language, and repeated negative association with a group identifier.

On subtle targeting specifically.

We monitor for: coded language with known discriminatory use, disproportionate negative framing of named groups, patterns of omission that erase a group's contribution or presence, and repeated use of language that correlates with documented bias in AI training data.

Where our system identifies these patterns, it will not refuse the request. It will redirect — offering a reframed version that preserves the legitimate content of the post without the harmful construction.

What StoryDesk does when it finds a problem

Not this

Abrupt refusal with no alternative.

Unexplained rejection.

Lecturing or moralising.

Leaving the user without a path forward.

This

A governed alternative — same intent, different construction.

A brief, direct explanation of what was redirected and why.

An elevated output that serves the user's legitimate goal.

StoryDesk's voice throughout — warm, direct, never condescending.

6. What Happens If You Break These Rules

If you violate Sections 2 or 4, your account may be suspended immediately — without needing to go through a content quality dispute first.

Ethics and output quality are assessed separately. A post that meets quality standards can still be an ethical violation (e.g., a technically well-written post that targets a protected group).

StoryDesk will cooperate with legal processes in cases involving proven harm.

If your account is suspended for an ethical violation, a quality compliance appeal will not reinstate it.