Simtheory Academy launch: AI skills for all
The University of New England has launched Simtheory Academy, a dedicated training platform designed to equip staff and students with practical AI skills and safety knowledge.
Through UNE's official training partnership with the Simtheory platform - which provides the underlying technology for Madgwick - the Academy addresses an important institutional responsibility: helping the university community harness AI tools effectively while understanding their limitations and risks.
Open to the public, Simtheory Academy offers AI education to a wide audience - from UNE colleagues and prospective students to community members seeking to build practical skills and use AI tools safely.
Two foundational courses now available
AI Quick Wins is a hands-on 60-minute course designed for immediate practical application. Participants learn to personalise AI tools, analyse documents efficiently, and structure effective prompts using the Goal-Context-Output framework. The course also covers creating specialised AI assistants and connecting them to real-world capabilities through the Model Context Protocol (MCP) - a standard that links AI to search engines, image generation tools, and more.
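The Goal-Context-Output framework can be sketched as a simple prompt template. This is an illustrative example only - the helper function and the sample wording are hypothetical, not material from the course:

```python
# Illustrative sketch of the Goal-Context-Output prompt framework.
# The function name and example text are hypothetical, not from the Academy.
def build_gco_prompt(goal: str, context: str, output: str) -> str:
    """Assemble a prompt with explicit Goal, Context, and Output sections."""
    return (
        f"Goal: {goal}\n"
        f"Context: {context}\n"
        f"Output: {output}"
    )

prompt = build_gco_prompt(
    goal="Summarise the attached unit outline for new students.",
    context="Audience: first-year students unfamiliar with university jargon.",
    output="Five plain-language bullet points, under 80 words total.",
)
print(prompt)
```

Separating the goal, the background the AI needs, and the shape of the answer you want tends to produce more predictable responses than a single unstructured request.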
"Participants can come in, learn how to prompt, create assistants, deal with MCPs, address hallucination, and effectively begin working with AI," explains Aaron Driver, Academic Director of LabNext70. By completion, participants will have deployed their first custom assistant, learned how to give it capabilities, and developed skills to tackle work challenges more effectively.
"Even if you are a regular user of AI, there might be lessons in there that plug learning gaps you might have inadvertently developed," Aaron notes.
AI Safety is a focused 15-minute course addressing the critical risks inherent in AI use.
It teaches users that AI functions as a 'prediction machine' that makes educated guesses, which can lead to inaccurate outputs.
Two primary risks are examined in detail: hallucination (the invention of false information) and sycophancy (AI's tendency to agree with and flatter users). Aaron explains that when these tendencies combine, they can create 'reinforcing loops' where users may become overconfident in inaccurate AI outputs.
The course also establishes the 'Golden Rule' of responsible AI use: users must critically evaluate and take accountability for AI outputs before applying them in their work. As Aaron emphasises, "AI is a tool - and you're the one in control of how you use it."
Three key safety techniques are taught: demanding verifiable sources for all AI-generated information, using MCPs to connect AI to live data, and uploading personal documents to provide verified information.
"We give users three techniques to help with this. The first is to always seek verifiable sources and avoid relying on stand-alone facts without verification," Aaron explains. "The second is to use grounding tools like MCPs to verify data. And the third is to provide your own source of truth. So, we show users how to drag and drop documents into chat and request that the AI bases responses only on the contents of attached documents and so on."
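The third technique - providing your own source of truth - can be illustrated with a hypothetical prompt wrapper that instructs the AI to answer only from an attached document. The function, document name, and wording below are assumptions for illustration, not the Academy's own material:

```python
# Hypothetical sketch of a grounding instruction wrapped around a question.
# Names and wording are illustrative, not taken from the course.
def grounded_prompt(question: str, document_name: str) -> str:
    """Ask the AI to answer strictly from the named attached document."""
    return (
        f"Using ONLY the contents of the attached document '{document_name}', "
        f"answer the following question. If the document does not contain "
        f"the answer, say so rather than guessing.\n\n"
        f"Question: {question}"
    )

print(grounded_prompt("What is the assessment deadline?", "unit_guide.pdf"))
```

Constraining the AI to an attached document, and explicitly permitting "I don't know", reduces the risk of hallucinated answers slipping through unnoticed.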
More training coming
The Simtheory Academy represents UNE's commitment to building AI capability responsibly across its community, helping users leverage these powerful tools while maintaining critical awareness of their limitations. Participants who complete a course also receive a certificate of completion recognising their learning.
As AI tools and their applications continue to evolve, so too will the Academy. Aaron is currently developing an Advanced AI Principles course, with regular content updates planned to help the university community stay equipped with relevant, evolving knowledge and skills.