Dear friends,
It’s the end of January.
If you’re still trying to keep your New Year plans alive, even imperfectly, you’re doing better than you think. For many of us, the real challenge isn’t motivation. It’s learning effectively in a world where information is everywhere and attention is limited.
Rather than sharing goals or resolutions, I want to document one practical learning case study — how I use AI tools to support technical refreshment, while keeping professional judgement, experience, and boundaries firmly in place.
This is not about shortcuts.
It’s about learning structure.
The learning workflow
The diagram below shows the learning workflow described in this case study — how I combine trusted material with AI tools to refresh technical understanding and create reusable learning outputs.
AI use note
This case study describes how I use AI tools to support technical refreshment and structured learning. All inputs are publicly available or deliberately sanitised materials. AI is not used to replace detailed training, hands-on practice, or professional judgement.
The context
I often need to refresh or uplift my understanding of dry, technical topics, such as accounting standards, regulator guidance, or industry practice, both for myself and for my team.
The difficulty is rarely access to information.
More often, the information is:
- scattered across multiple sources
- repetitive or overlapping
- sometimes inconsistent
- time-consuming to digest end to end
What I needed was a way to:
- get a high-level technical refresh
- surface confusing or judgement-heavy areas
- and do it without using any confidential or client data
Step 1: Assemble safe learning inputs
I start by compiling only public or sanitised learning material.
In this case, I worked with around 18 resources, including:
- accounting standards
- regulator guidance
- industry practice notes
- publicly available company disclosures
- published examples
I deliberately avoid:
- client data
- internal workpapers
- confidential financial information
The purpose of this step is simple:
build a reliable and safe learning base.
Step 2: Find the facts (NotebookLM)
I upload all learning material into NotebookLM.
At this stage, I use the tool strictly for fact-finding and structuring, not decision-making.
Typical uses include:
- extracting key facts from each document
- comparing interpretations across sources
- identifying recurring themes
- highlighting areas that are commonly confusing
I’m not trying to reach conclusions yet.
I’m trying to create clarity.
The output from this step is:
- structured notes
- fact-based summaries
- a clearer view of what I understand versus what needs more work
This is where most of the time efficiency comes from.
Step 3: Stress-test understanding (ChatGPT)
Once the facts are structured, I switch tools.
I use ChatGPT as a thinking partner, not an authority.
This is where I:
- challenge assumptions
- ask “why”, “what if”, or “does this really hold?”
- re-explain concepts in plain language
- test whether my interpretation makes sense for my purpose
If something feels weak or inconsistent, I go back to the source material.
This step is about pressure-testing my own thinking.
Step 4: Final synthesis & outputs (back to NotebookLM)
After stress-testing, I bring the validated thinking back into NotebookLM and create final learning outputs, such as:
- concise summaries
- structured learning notes
- reusable checklists
- short learning packs
In some cases, I also convert the material into a podcast-style explanation. This helps reinforce understanding and makes revision easier, especially for complex or technical topics.
These outputs are primarily for:
- my own learning
- internal knowledge refresh
- revisiting later when needed
Step 5: Human judgement (non-negotiable)
Before relying on anything, I always ask myself:
Can I explain this clearly without the tool?
If the answer is no, I go back a step.
This workflow supports fast technical refreshment and uplift, but it does not replace:
- detailed training
- hands-on practice
- or years of professional experience
Judgement still sits with the human.
What this approach is — and isn’t
What it is:
- a structured way to refresh technical knowledge
- useful for complex or dry topics
- effective for identifying key takeaways and confusing areas
What it isn’t:
- a shortcut to expertise
- a replacement for experience
- a substitute for professional judgement
AI supports the process.
Experience still owns the outcome.
Why I’m sharing this
We’re no longer short on information.
We’re short on good learning processes.
This is just one approach that worked for me — safely, carefully, and responsibly.
Next time, I may try something different and see what holds up.
If you’d like to listen to one of the accounting-standard explanations I created using this approach, the link is included below.
Property JV & Equity Accounting — Training Hub
See you next time.
Lydia
P.S. If this letter found you at just the right moment, I’d love to hear about it. Join my weekly letter list and let’s figure it out together, one AI-shaped step at a time.
☕💌 If you’d like to fuel my next cup of coffee and keep this journey going, you can:
- 💳 Buy me a coffee via Stripe
- 🅿️ Buy me a coffee via PayPal
Your support keeps the ideas flowing and the coffee brewing ✨