This practice-based reflection examines the use of large language models to rapidly produce a classroom demonstration in an undergraduate JavaScript course. It outlines a lightweight workflow that begins with clarifying learning outcomes, proceeds through prompting, instructor review, and editing, and concludes with deployment. Reported benefits include faster preparation and clearer alignment with learning objectives. The paper also documents risks and mitigations, including cognitive load from “seductive details,” potential code inaccuracies, and shifts in perceived instructor credibility. It provides practical guardrails: bias audits of names, scenarios, and datasets; accessibility by default (semantic structure, keyboard operability, captions and transcripts, sufficient contrast, alt text); equitable access (low-bandwidth and artificial intelligence (AI)-free alternatives, avoidance of paywalled tools); and strict avoidance of student or sensitive data in third-party tools. Limitations include a single-course context and a reflective, non-experimental method. The goal is to offer actionable guidance for instructors who want to use AI for speed and flexibility while maintaining rigor, transparency, and student trust.
Tatiana Golechkova, Nataliya Pletneva