Examples?
> Generation effect: Accepting generated code and decreasing generating one's own code can skip the active processing that builds understanding.
Holy truth.
I want to learn Java Spring, and will probably let AI help me / quiz me. I will take a look at the skills for inspiration.
Conceptually, you should treat them as incremental software instead of magic you grab from others [1].
The killer feature is that coding harnesses tend to have SkillBuilder agent skills, so creating skills becomes very easy and you can evolve them over time.
I recommend you build your own for your particular pain points.
A very simple example [2] shows what another user mentioned around "evals", so that you can really achieve good-enough correctness for your automation.
- [1] https://alexhans.github.io/posts/series/evals/building-agent...
- [2] https://alexhans.github.io/posts/series/evals/sketch-to-text...
Also, you can secure and lock down tool calls better, make the agent's tasks retryable, give it failure modes, etc. If your laptop dies during agent work, only God and the agent know what happened to your code... oh no wait, the agent just needs to spend 100k tokens to remember where it was (great way to spend your money).
I know I sometimes get demotivated midway, but that also tells me it might not be worth the investment.
> When you complete architectural work (new files, schema changes, refactors), Claude offers optional 10-15 minute learning exercises grounded in evidence-based learning science. The exercises use techniques like prediction, generation, retrieval practice, and spaced repetition to provide you with semi-worked examples from across your own project work.
Confusing name though.
I still don't see why AI would be mandatory. It's helpful, yes, but not mandatory.
Build your expertise, not just your projects.
This skill uses an adaptive "dynamic textbook" approach to help you integrate science-based expertise building exercises while doing agentic coding.
When you complete architectural work (new files, schema changes, refactors), Claude offers optional 10-15 minute learning exercises grounded in evidence-based learning science. The exercises use techniques like prediction, generation, retrieval practice, and spaced repetition to provide you with semi-worked examples from across your own project work.
Pairs well with Learning-Goal, a skill that guides you through semi-structured, interactive learning goal-setting using the technique of Mental Contrasting with Implementation Intentions (MCII), an evidence-based exercise.
This repository is also a Codex plugin marketplace. To add it from GitHub:
codex plugin marketplace add https://github.com/DrCatHicks/learning-opportunities.git
For local development from a checkout:
codex plugin marketplace add /path/to/learning-opportunities
The Codex marketplace includes:
- learning-opportunities — the core learning exercise skill
- learning-opportunities-auto — optional post-commit prompting hook
- orient — repo orientation generator

This repository is a Claude Code plugin marketplace. To install:
1. Add the marketplace:
   /plugin marketplace add https://github.com/DrCatHicks/learning-opportunities.git
2. Install the plugin:
   /plugin install learning-opportunities@learning-opportunities
3. Restart Claude Code to activate.
For more on Claude Code plugins, see the plugin documentation.
Linux and macOS users can install learning-opportunities-auto alongside learning-opportunities to have Claude automatically consider offering an exercise after each git commit. Windows users can use it too — a little setup is required.
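For intuition, here is a minimal sketch of what such a post-commit hook could look like, assuming Claude Code's PostToolUse hook protocol (tool input arrives as JSON, and extra context is returned via `hookSpecificOutput.additionalContext`). The function name and message wording are illustrative, not the plugin's actual implementation:

```python
import json
import re

def build_hook_output(tool_input):
    """Return the hook's stdout payload if the tool call ran a git commit."""
    command = tool_input.get("command", "")
    if not re.search(r"\bgit\s+commit\b", command):
        return None  # not a commit; the hook stays silent
    return {
        "hookSpecificOutput": {
            "hookEventName": "PostToolUse",
            "additionalContext": (
                "[learning-opportunities-auto] The user just committed code. "
                "Consider offering a 10-15 minute learning exercise."
            ),
        }
    }

# Example: a Bash tool call that ran a commit triggers the nudge.
payload = build_hook_output({"command": "git commit -m 'add schema'"})
print(json.dumps(payload, indent=2) if payload else "no output")
```

A real hook would read the event JSON from stdin and print this payload to stdout so Claude Code can inject the extra context.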
If you're learning a new repo you can create an orientation.md file with suggested lessons using the orient skill. The orientation approach applies strategies from empirical research on program comprehension and codebase navigation — including how expert developers sample codebases strategically rather than reading exhaustively. See the orient bibliography for the full source list.
Install the orient plugin:
/plugin install orient@learning-opportunities
Navigate to the repo you want to orient yourself to, and call the orient skill either with the default invocation
/orient
or with Simon Willison's showboat tool:
/orient showboat
Then call learning-opportunities with the orient argument to be offered two lessons that orient you to core features of the repo:
/learning-opportunities orient
AI coding tools create specific risks of decreasing users' engagement in learning by introducing inefficient learning habits. These effects can be anticipated from several foundational, science-backed learning principles.
The techniques in SKILL.md are designed to counteract these risks by reintroducing the active processing those principles describe.
This skill interrupts that pattern by reminding you to consider investing in reflection and learning. It introduces a different "mode" of interacting with Claude, one that will intentionally feel different from highly fluent, fast agentic coding, in the service of helping you reflect on and explore your generated work. This skill may be particularly useful for users who are experimenting with developing discrete projects with agentic coding that involve multiple unfamiliar languages, techniques, or architectural patterns.
You can define "significant work" yourself, but I've suggested: creating new files or modules, database schema changes, architectural decisions or refactors, implementing unfamiliar patterns, and any work where you asked "why" questions during development. The key idea is to find a moment in your personal flow where a learning opportunity is most beneficial. After you complete significant work, Claude will ask:
"Would you like to do a quick learning exercise on [topic]? About 10-15 minutes."
If you accept, Claude runs you through an interactive exercise. A key design principle: Claude pauses and waits for your input rather than answering its own questions. This can feel frustrating, but it pushes against Claude's default of always providing the full answer and encourages your own mental effort and learning. You may still encounter, and need to design against, Claude's default to provide the complete answer; please feel free to let me know if you find gotchas or conflicts in your own workflow that you think will generalize to others, so I can incorporate fixes into the skill (e.g., I learned we needed to suppress prompt suggestions).
Two suppression conditions, which can be adapted to your workflow, are currently suggested; while they apply, Claude will not prompt learning opportunities.
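As a rough mental model of the offer-and-suppression flow (the skill implements this as prompt guidance, not code; the trigger names below are illustrative):

```python
# Hypothetical sketch of when an exercise gets offered.
# Trigger names are illustrative, not the skill's actual vocabulary.
SIGNIFICANT_WORK = {
    "new_file", "schema_change", "refactor",
    "unfamiliar_pattern", "why_question",
}

def should_offer_exercise(work_type, declined_this_session, suppression_active):
    # Once the user declines, there are no more offers this session;
    # an active suppression condition also silences offers.
    if declined_this_session or suppression_active:
        return False
    return work_type in SIGNIFICANT_WORK

print(should_offer_exercise("schema_change", False, False))  # True
print(should_offer_exercise("schema_change", True, False))   # False
```

The point of the sketch: the offer is gated first by your preferences for the session, and only then by whether the work was significant.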
The exercises draw from well-established findings in learning science, along with substantive research on typical learner misconceptions. Design choices also draw from multiple qualitative interviews with developers about what aspects of rapid agentic coding they find most frustrating, worrisome, or difficult when it comes to their own learning and development.
See PRINCIPLES.md for detailed explanations, which can help you develop new exercise types or simply learn more about strategies that support your own learning.
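As a concrete taste of one technique the exercises use, spaced repetition can be sketched as an expanding review schedule (illustrative only; the skill applies the idea conversationally rather than in code):

```python
def next_interval_days(current_interval_days, recalled_correctly):
    """Toy expanding-interval scheduler for spaced repetition."""
    if not recalled_correctly:
        return 1  # a miss sends the item back to a review tomorrow
    # Each successful recall doubles the gap before the next review.
    return max(1, current_interval_days * 2)

# A run of successful recalls spaces reviews out: 1, 2, 4, 8 days...
interval = 1
for _ in range(3):
    interval = next_interval_days(interval, recalled_correctly=True)
print(interval)  # 8
```

The design intuition is simply that material is revisited just as it is about to be forgotten, which strengthens retrieval more than massed review.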
If you're trying this skill with your team, you can layer on a lightweight pre/post measurement to make the experiment more visible and valued in your organization.
MEASURE-THIS.md is a companion playbook for running that measurement.
The measures are free and open access under a CC-BY-SA 4.0 license. For the full set of measures and design notes, see the AI Skill Threat open access measures supplement and the Developer Thriving open access measures supplement.
This skill can be significantly refined and adapted to your own needs.
This skill was developed based on learning science and informed by multiple qualitative interviews with software development professionals about their concerns around agentic coding, as part of my open-science empirical research on developer thriving and skill development in AI-assisted workflows. In my research with thousands of developers, I've also found that a strong value on and commitment to learning predicts that developers feel less threat, worry, and anxiety when imagining needing to adjust to agentic coding. Learning culture is also associated with increases in team effectiveness overall, not just individual productivity.
I'd love to know if you enjoy this and what you learn! Sharing open science resources helps researchers like me create more things to help software teams. I always appreciate a shout-out or a share in public, which helps more people learn about the psychology of software teams. Get updates and access to more of the psychology of software teams at my newsletter: Fight for the Human
Learning-Opportunities:
Dr. Cat Hicks
I'm a psychological scientist studying software teams and technology work, an author, a public speaker, a research architect, and an empirical interventionist who builds radical research teams that put answers behind questions everyone is asking but few people are gathering real evidence about.
Orient:
Dr. Michael Mullarkey
I'm a machine learning engineer who used to be a therapist and social science researcher. I'm thinking a lot about how to leverage agentic AI to help people learn skills; see blendtutor for another example.
This work is licensed under a Creative Commons Attribution 4.0 International License.