AI Policy and Student Self-Efficacy

Artificial Intelligence (AI) is reshaping how we learn, work, and create knowledge. At New Study, we actively engage with AI, but we do so consciously, critically, and responsibly. This policy is designed to support you in developing ethical AI literacy.

This page explains: 

  • how AI may and may not be used at New Study, 
  • why student self-efficacy is central to learning, especially in an AI-rich environment, 
  • how both are connected to academic integrity, exam success, and long-term competence.

The complete, binding policies and workbooks are linked below and updated regularly.

Our Core Principle 

AI may support learning; it must never replace learning. 

AI is an assistant, not an author. If a student cannot explain, justify, or adapt their work independently, then learning has not taken place, regardless of how polished the output looks.

Why This Policy Exists

From real teaching experience and current research, we observe a clear pattern: 

  • AI can strengthen learning when used for reflection, feedback, and clarification. 
  • AI can undermine learning when it replaces effort, struggle, or reasoning. 
  • The greatest risk is the illusion of competence: students feel confident while their actual understanding erodes. 

This is not a hypothetical risk. It directly affects: 

  • exam performance (in mathematics, algorithms, coding, software engineering, etc.), 
  • ability to work independently,
  • confidence under pressure, 
  • long-term professional readiness. 

This policy exists to protect learning, not to restrict innovation.

Learning Requires Effort by Design 

Deep learning is not smooth or effortless. Decades of cognitive science show that productive struggle, mistakes, and revision are essential for durable understanding and transfer. When effort is removed too early, for example through instant AI-generated solutions, students may feel fluent without truly understanding. This leads to fragile knowledge that collapses in exams and real-world situations. At New Study, difficulty is therefore treated as a necessary part of learning well.

AI and the Emerging Split Between Learners 

In AI-rich learning environments, students tend to follow one of two paths: 

AI as a Scaffold (Encouraged when transparent) 

  • clarifying concepts after attempting a task, 
  • receiving feedback on one’s own work, 
  • comparing alternative approaches, 
  • reflecting on reasoning and mistakes. 

AI as a Shortcut (Not acceptable unless explicitly permitted) 

  • generating solutions instead of constructing them, 
  • bypassing practice and struggle, 
  • masking gaps in understanding with polished output. 

Research shows that these paths lead to very different outcomes, which often do not become visible until the second or third year of study, or later. This policy exists to keep students on the first path.

What This Means in Practice 

Demonstrable Ownership 

Any acceptable AI use is conditional on demonstrable ownership. 

Students must be able to: 

  • explain their reasoning, 
  • justify design and solution choices, 
  • critique alternatives (including AI-generated ones), 
  • adapt or modify their work independently. 

If this cannot be demonstrated, the work may be treated as not the student’s own, even if AI use was declared. 

Transparency Is Required 

Where AI is allowed, students must be transparent about: 

  • which tools were used, 
  • how they were used, 
  • what was changed or integrated, 
  • what was learned. 

Course Rules Always Apply 

Individual courses may impose stricter rules, up to and including prohibiting AI entirely, especially in early semesters or diagnostic contexts. If a course rule is stricter than the general policy, the course rule applies. When in doubt, assume AI is not allowed and ask.

AI Use, Self-Efficacy, and Student Success 

AI changes how easy it is to produce output, but it does not build the psychological foundations of learning. What determines long-term success is self-efficacy: the belief that "I can succeed through my own actions." Students with strong self-efficacy:

  • persist when things are difficult, 
  • recover from setbacks, 
  • regulate their learning strategically, 
  • use tools (including AI) without becoming dependent. 

Students with weak self-efficacy: 

  • avoid difficulty, 
  • outsource thinking, 
  • overestimate their understanding, 
  • struggle in exams and independent work. 

Responsible AI use and self-efficacy are therefore interdependent.

How New Study Actively Supports This 

To support responsible AI use and genuine learning, New Study combines: 

  1. A clear AI policy defining boundaries, expectations, and consequences. 
  2. A Self-Efficacy Workbook for Students that helps them reflect on: 
    • what they can control, 
    • how they respond to difficulty, 
    • how AI affects their learning behavior. 
  3. A Feed-Forward Framework that shifts focus from retrospective satisfaction (“Did I like the course?”) to future-oriented learning (“What helped me learn? What didn’t?”).

Together, these form a coherent system that makes learning processes visible, even when AI is involved.

Authoritative Documents & Versions 

The following documents define the binding framework for New Study. They are versioned and updated as research and practice evolve. 

Students are expected to be familiar with these documents.

A Shared Learning Agreement 

This framework exists to protect: 

  • your ability to think independently, 
  • your confidence under pressure, 
  • your success in exams, 
  • your readiness for professional life. 

By studying at New Study, you agree to: 

  • engage with AI thoughtfully and transparently, 
  • take responsibility for your own learning, 
  • develop skills that still work when AI is unavailable.