CSMI AI Policy

Students may use Artificial Intelligence (AI) tools to support writing, coding, literature triage, or exploratory analysis. In all cases, usage MUST be explicitly disclosed and properly cited (see below).

AI tools are assistants, not authors.

You MUST:

  • Verify every AI‑generated statement, equation, code fragment, figure description, and citation.

  • Cite the original scholarly / technical sources (AI output is never a primary source).

  • Keep a reproducibility log (tool, model/version, date, purpose, prompt summary).

  • Ensure the final wording, structure, and scientific claims are your own work.

You MUST NOT:

  • Submit undisclosed AI‑generated content.

  • Paste confidential, personal, embargoed, or exam material into external AI services.

  • Treat AI suggestions as authoritative without independent validation.

Responsibility for correctness, originality, licensing, and academic integrity remains entirely with the student/authors.

1. AI Assistance and References

You may add a dedicated section in your reports (recommended for any non-trivial use):

== AI Assistance Statement

Some parts of this work were prepared with the assistance of Artificial Intelligence (AI) tools.
The following tools were used (all outputs reviewed, validated, and adapted by the authors):

* ChatGPT (OpenAI, GPT‑4 family) – preliminary drafting and clarification of wording.
* GitHub Copilot (GitHub/Microsoft, 2025) – inline code completion for C++ and Python (non-original boilerplate acceleration only).
* Claude 3.5 Sonnet (Anthropic) – summarisation of background materials.
* Codestral 0.3 (Mistral AI) – code refactoring suggestions and templated scaffolding.
* Gemini 1.5 Pro (Google) – multimodal reasoning (text + figure interpretation assistance).

All AI-assisted content was independently checked for factual correctness, re-written where necessary, and integrated with original analysis. The authors take full responsibility for accuracy, originality, and proper attribution.

Or in LaTeX:

\section*{AI Assistance Statement}

Some parts of this work were prepared with the assistance of Artificial Intelligence (AI) tools. The following were used (all outputs verified, revised, and integrated responsibly by the authors):

\begin{itemize}
  \item ChatGPT (OpenAI, GPT-4/5 family) – linguistic refinement and short explanatory drafts.
  \item GitHub Copilot (GitHub/Microsoft, 2025) – inline code suggestions (non-novel boilerplate only) for C++/Python.
  \item Claude 3.5/4 Sonnet (Anthropic) – condensation of background notes.
  \item Codestral 0.3 (Mistral AI) – refactoring and structural code hints.
  \item Gemini 1.5 Pro (Google) – multimodal reasoning support.
\end{itemize}

All AI outputs were \textbf{verified, adapted, and corrected by the authors}. The authors accept full responsibility for the correctness, originality, and scholarly integrity of this work.

To cite AI tools properly, import the BibTeX file in the source tree (ai-tools.bib) into your reference manager (Zotero, JabRef, Overleaf, etc.) and cite them like any other @software entry. Always record an access date for hosted models.
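
For example, with biblatex (which supports the @software entry type and the urldate field), a minimal setup might look as follows; the style option is illustrative, and the citation key is taken from the entries below:

\usepackage[backend=biber,style=numeric]{biblatex}
\addbibresource{ai-tools.bib}

% in the body:
Preliminary drafting was assisted by ChatGPT~\cite{openai_chatgpt_2025}.

% at the end of the document:
\printbibliography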

Maintain a simple log (CSV / Markdown) with columns: date, tool, model/version, purpose, prompt summary / file scope. This supports reproducibility and clarifies authorship.
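
A minimal sketch of such a log in CSV form (all rows are illustrative placeholders):

date,tool,model/version,purpose,prompt summary / file scope
2025-09-01,ChatGPT,GPT-4,wording,"rephrase introduction paragraph"
2025-09-02,GitHub Copilot,2025,boilerplate,"argument parsing in main.cpp"
2025-09-03,Claude,3.5 Sonnet,summarisation,"condense background notes"

The ai-tools.bib file referenced above provides entries such as the following: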

@software{openai_chatgpt_2025,
  author    = {{OpenAI}},
  title     = {ChatGPT},
  year      = {2025},
  version   = {GPT-4 family},
  url       = {https://chat.openai.com},
  urldate   = {2025-09-07},
  note      = {Conversational large language model assistant}
}

@software{anthropic_claude_2025,
  author    = {{Anthropic}},
  title     = {Claude 3.5 Sonnet},
  year      = {2025},
  version   = {Release August 2025},
  url       = {https://claude.ai},
  urldate   = {2025-09-07},
  note      = {Large language model assistant}
}

@software{mistral_codestral_2025,
  author    = {{Mistral AI}},
  title     = {Codestral},
  year      = {2025},
  version   = {0.3},
  url       = {https://mistral.ai},
  urldate   = {2025-09-07},
  note      = {Code-focused large language model}
}

@software{github_copilot_2025,
  author    = {{GitHub}},
  title     = {GitHub Copilot},
  year      = {2025},
  url       = {https://copilot.github.com},
  urldate   = {2025-09-07},
  note      = {AI pair-programming assistant}
}

@software{google_gemini_2025,
  author    = {{Google}},
  title     = {Gemini 1.5 Pro},
  year      = {2025},
  version   = {1.5 Pro},
  url       = {https://gemini.google.com},
  urldate   = {2025-09-07},
  note      = {Multimodal large language model}
}

Do not cite AI tools as evidence for scientific results. Always trace claims back to peer‑reviewed literature, standards, or authoritative datasets.

2. Enforcement & Consequences

Undisclosed or improper AI use is treated as an academic integrity issue.

1. Undisclosed AI Assistance

If a teacher or reviewer determines that AI was used but not disclosed:

  • Grade adjustment at instructor discretion, up to assigning a 0 for the affected work.

  • Mandatory meeting with the student to review this policy and usage expectations.

  • An internal written note may be placed in the academic record to track repeat occurrences (not a formal disciplinary mark on a first occurrence unless the case is severe).

2. Repeat or Aggravated Violations

Triggered by: repeated undisclosed use, falsified disclosure logs, or AI use after explicit prohibition in an assignment.

Possible additional actions:

  • Formal academic integrity report.

  • Grade penalty on course component (e.g., project / exam weight reduction).

  • Loss of eligibility for certain project topics or internship recommendations.

3. Severe Misconduct

Examples: submitting predominantly AI-generated work as original, generating or fabricating data/results, impersonation, or sharing confidential data with external AI systems.

Escalation may include:

  • Formal disciplinary procedure under university academic integrity code.

  • Course failure recommendation.

  • Internship host notification (if breach involves host material).

4. Good-Faith Errors

If the student disclosed AI use but the extent was unclear, or the disclosure format or log was incomplete:

  • Correction requested (revise disclosure / add missing log details).

  • No grade penalty if promptly corrected and no intent to mislead is evident.

5. Appeals

Students may contest a determination by:

  1. Requesting clarification from the instructor.

  2. Providing their reproducibility / prompt log.

  3. Demonstrating authorship by explaining or re-deriving questionable sections live.

6. Instructor Responsibilities

Instructors applying sanctions should:

  • Document rationale (what indicators suggested AI use: linguistic uniformity, improbable error patterns, code stylometry, etc.).

  • Offer the student an opportunity to respond.

  • Apply proportional sanctions consistent with this section.

7. Student Best Practices

In addition to the requirements above, students should:

  • Keep an incremental prompt/response log (date, tool, purpose summary).

  • Use AI for transformation and clarification, not for final drafting.

  • Regularly paraphrase and annotate AI suggestions while learning.
