STEP 1 Analyze and Define Your Writing Style

Begin by collecting several representative samples of your writing. Pay attention to sentence structure, preferred punctuation, tone, vocabulary, and formality. Note whether you favor short or long sentences, active or passive voice, and whether you use devices such as rhetorical questions or analogies. Define the characteristics that make your style unique.
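This analysis can be partly automated. The sketch below is a minimal, illustrative profiler (the function name and the passive-voice heuristic are assumptions, not a standard tool): it counts sentence lengths, punctuation habits, and rough passive-voice hints in a sample.

```python
import re

def profile_style(text):
    """Summarize basic style signals in a writing sample: sentence
    lengths, punctuation habits, and crude passive-voice hints."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words_per_sentence = [len(s.split()) for s in sentences]
    return {
        "sentences": len(sentences),
        "avg_sentence_length": sum(words_per_sentence) / len(sentences),
        "em_dashes": text.count("\u2014"),
        "semicolons": text.count(";"),
        "questions": text.count("?"),
        # Rough passive-voice hint: a "be" verb followed by an -ed word.
        "passive_hints": len(re.findall(
            r"\b(?:is|are|was|were|been|being)\s+\w+ed\b", text)),
    }

sample = ("The report was reviewed by the committee. Why did it take so long? "
          "We favor short sentences; we avoid em dashes.")
print(profile_style(sample))
```

Run this over several samples and compare the numbers; consistent values (say, a short average sentence length and zero em dashes) become the concrete rules you feed the model in the next step.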

STEP 2 Configure Instructions or Prompts

Nearly all LLM platforms support some method of guiding model output, such as prompt engineering, templates, or custom instruction fields. When setting up your model, provide explicit, clear instructions:

Specify sentence structure and punctuation: For example, “Never use em dashes. Instead, use a comma to continue the sentence or a period to start a new one.”
Direct the tone: State whether you expect direct, analytical, conversational, or other tones.
Demand alignment: Reference your own samples and describe the precise attributes to emulate.
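The three directives above can be assembled programmatically into one instruction block, which keeps your rules consistent across platforms. This is a hedged sketch; the function name and parameters are illustrative, and the resulting string would go into whatever instruction field your platform exposes.

```python
def build_style_instructions(tone, rules, sample_excerpt):
    """Assemble a custom-instruction block from a tone, explicit
    style rules, and a short sample to emulate."""
    lines = [f"Write in a {tone} tone."]
    lines += rules
    lines.append("Emulate the style of this sample:")
    lines.append(sample_excerpt)
    return "\n".join(lines)

instructions = build_style_instructions(
    tone="direct, analytical",
    rules=[
        "Never use em dashes. Use a comma to continue the sentence "
        "or a period to start a new one.",
        "Favor short sentences and active voice.",
    ],
    sample_excerpt="We ship on Friday. The tests pass, the docs are done.",
)
print(instructions)
```

Keeping the rules in a list makes iteration in Step 4 cheap: you add or tighten one rule and regenerate the block.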

STEP 3 Encourage Critical Engagement

To avoid the model defaulting to agreement, instruct it to act as an honest, analytical partner. You can request:

“Evaluate ideas critically. Do not automatically agree with statements.”
“If you spot an assumption or flaw, question or challenge it directly, using rational and evidence-based argumentation.”
“Be concise, direct, and transparent in critique.”
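In practice, these critique directives are combined with your style rules into a single system message. The sketch below assumes the widely used role/content chat-message format; the function name is illustrative, and no particular provider's API is implied.

```python
CRITIQUE_DIRECTIVES = [
    "Evaluate ideas critically. Do not automatically agree with statements.",
    "If you spot an assumption or flaw, question or challenge it directly, "
    "using rational and evidence-based argumentation.",
    "Be concise, direct, and transparent in critique.",
]

def make_system_message(style_block, critique_directives):
    """Merge style rules and critique directives into one system
    message in the common role/content dict format."""
    content = (style_block
               + "\n\nCritical engagement:\n"
               + "\n".join(f"- {d}" for d in critique_directives))
    return {"role": "system", "content": content}

msg = make_system_message("Never use em dashes.", CRITIQUE_DIRECTIVES)
print(msg["content"])
```

Placing the critique directives in the system message, rather than in each user prompt, makes the behavior persistent across a conversation.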

STEP 4 Test, Review, and Refine

After submitting your instructions, interact with the model using sample prompts. Review the outputs for tone, punctuation, and level of engagement. If outputs are not aligned with your expectations, iteratively adjust your guidance until the model’s writing reliably matches your desired style and engagement level.
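Part of this review loop can be mechanical. A minimal checker like the one sketched below (the function and thresholds are assumptions, tailor them to your own rules) flags outputs that break your configured style, telling you which instruction to tighten next.

```python
import re

def style_violations(output):
    """Flag outputs that break the configured style rules, so you
    know which instructions to tighten on the next iteration."""
    violations = []
    if "\u2014" in output:  # em dash
        violations.append("em dash found")
    sentences = [s for s in re.split(r"[.!?]+", output) if s.strip()]
    long_ones = [s for s in sentences if len(s.split()) > 30]
    if long_ones:
        violations.append(f"{len(long_ones)} sentence(s) over 30 words")
    return violations

print(style_violations(
    "The plan \u2014 if we can call it that \u2014 shipped late."))
# -> ['em dash found']
```

An empty list means the output passed the mechanical checks; tone and engagement still need human review.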

STEP 5 Advanced Customization

For further specialization, consider techniques such as:

Fine-tuning: Where available, further train the model on a dataset of your own writing and annotated critical responses.
Parameter-efficient methods: Use prompt templates, adapters, or lightweight training to efficiently adapt the LLM without large computational overhead.
RAG (Retrieval-Augmented Generation): Supply reference documents at query time so the model anchors its style and substance to your materials.
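The RAG flow can be sketched in a few lines. This is a toy stand-in, scoring documents by word overlap where a production setup would use embeddings and a vector index, but the retrieve-then-prepend structure is the same.

```python
def retrieve(query, documents, k=1):
    """Toy retriever: rank reference documents by word overlap with
    the query and return the top k."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Prepend the retrieved material so the model anchors its style
    and substance to your documents."""
    context = "\n".join(retrieve(query, documents))
    return f"Use these reference materials:\n{context}\n\nTask: {query}"

docs = [
    "Our style guide bans em dashes and favors short sentences.",
    "Quarterly revenue grew in the enterprise segment.",
]
print(build_prompt("Rewrite this memo in our house style guide", docs))
```

Swapping the overlap scorer for an embedding-based one changes only the `retrieve` function; the prompt assembly stays identical.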

✓ The Result

By following these steps, you can tailor any LLM to write with your distinctive style, avoid default punctuation habits such as em dashes, and provide more balanced, critical outputs across tasks.