The WASC accreditation review process has always demanded something that standardized checklists cannot capture: the ability to hold an entire institution’s story in your mind at once, from its stated goals and lived practices to the sometimes uncomfortable distance between the two. For years, that cognitive burden fell entirely on the WASC Chair and Visiting Committee. The evidence sat in folders, the self-study report sat in a binder, and the work of triangulation happened inside one person’s head over a series of days that often stretched into long evenings.
That model still works. But it no longer has to work alone.
Over the past year, I have integrated Google’s NotebookLM into my WASC review workflow, and the results have fundamentally changed how I prepare, analyze, and report before, during, and after my visits. This is not a post about replacing professional judgment: it is about amplifying it. What follows is the process I now use, phase by phase, to conduct reviews that are more evidence-grounded, more analytically rigorous, and ultimately more useful to the schools I serve.

The Human Lens Comes First
Before any AI is involved, I begin with a full, unassisted read of the self-study report. I annotate my copy, flagging themes that recur across sections, inconsistencies in the narrative, gaps in the evidence base, and practices that strike me as genuinely exemplary. This first pass is entirely about professional judgment: the kind of pattern recognition that comes from years inside schools, not from AI.
This step is non-negotiable. No AI tool can replicate the contextual understanding a trained reviewer brings to a school’s story. NotebookLM is only as valuable as the questions I bring to it, and those questions are shaped by this initial human read.
Building the Evidence Architecture
On my second pass, I go deeper into the evidence. I download every artifact referenced in the self-study: curriculum maps, student work samples (with all personally identifiable information removed), assessment data, committee minutes, professional development records, and program evaluations. Data privacy is not a secondary consideration here; it is the foundation on which the entire AI workflow rests. No student PII enters NotebookLM under any circumstances.
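Where evidence arrives as plain-text or CSV exports, a quick local scrub before anything is uploaded adds a second layer of protection on top of the school’s own redaction. Below is a minimal sketch in Python of what that pass might look like; the PII_PATTERNS and redact_file names are my own illustrative assumptions, not part of NotebookLM or any WASC tooling, and an automated pass should always be followed by a manual check.

```python
import re
from pathlib import Path

# Hypothetical patterns for common PII in evidence exports.
# Illustrative, not exhaustive: always follow an automated
# pass with a manual review before uploading anything.
PII_PATTERNS = {
    "student_id": re.compile(r"\b\d{6,9}\b"),                    # district ID numbers
    "email":      re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone":      re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub_text(text: str) -> str:
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

def redact_file(src: Path, dest_dir: Path) -> Path:
    """Write a scrubbed copy of src into dest_dir and return its path."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    out = dest_dir / src.name
    out.write_text(scrub_text(src.read_text(encoding="utf-8")), encoding="utf-8")
    return out

if __name__ == "__main__":
    for f in Path("evidence_raw").glob("*.txt"):
        print("scrubbed:", redact_file(f, Path("evidence_clean")))
```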
Once the evidence is assembled, I upload the full set into NotebookLM and label each document clearly. Precision in labeling pays dividends later: a document titled “Math Curriculum Map 2023-2024” produces far more useful AI analysis than one titled “Doc7,” so I apply the same descriptive naming to every piece of evidence in the set.
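For large evidence sets, I find it faster to script the renaming than to do it by hand in the upload dialog. Here is a small sketch of that step, assuming a hand-built LABELS mapping from opaque export names to descriptive titles; the file names are illustrative, not a prescribed WASC convention.

```python
from pathlib import Path

# Hypothetical mapping from opaque export names to descriptive labels,
# built by hand while cataloging the evidence. Example names only.
LABELS = {
    "Doc7.pdf":  "Math Curriculum Map 2023-2024.pdf",
    "Doc12.pdf": "Leadership Committee Minutes Fall 2023.pdf",
    "Doc15.pdf": "Professional Development Records 2022-2024.pdf",
}

def relabel(evidence_dir: Path) -> None:
    """Rename each known file so its NotebookLM source label is self-describing."""
    for old_name, new_name in LABELS.items():
        src = evidence_dir / old_name
        if src.exists():
            src.rename(evidence_dir / new_name)
            print(f"{old_name} -> {new_name}")

if __name__ == "__main__":
    relabel(Path("evidence_clean"))
```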
AI-Driven Triangulation
This is where the workflow shifts from preparation to analysis. With the evidence loaded, I begin prompting NotebookLM with questions designed to surface alignment, misalignment, and patterns across the full document set. I ask questions like: “Compare the goals stated in the self-study with the evidence provided in the action plan, the report itself, and the supporting data. Where do you see gaps?” or “Based on the committee minutes and professional development records, how consistently is the school mission embedded in practice?” NotebookLM then cross-references claims against evidence, identifies thematic patterns, and flags contradictions that a reviewer might miss when processing hundreds of pages sequentially.
The output is not a summary. It is triangulation. As a researcher, I consider triangulation essential because it ties quantitative and qualitative findings together. I have seen it surface findings like: the self-study emphasizes a particular form of instruction as a school-wide practice, but the data show limited evidence of it happening in classrooms. Or: equity is a stated priority in every section, but the committee minutes reveal that action items tied to achievement gaps are consistently deferred. These are the kinds of findings that make a review genuinely useful to a school’s improvement, and they emerge from evidence, not inference.
Generating Targeted Questions for Each Stakeholder Group
Once the AI-assisted analysis is complete, I use NotebookLM to generate the specific questions I will bring to each stakeholder group during the site visit. These are not generic accreditation prompts drawn from WASC. They are precision tools derived directly from the triangulation findings, and they are differentiated by audience (teachers, counselors, classified staff, district leadership), because what a teacher can tell you and what a district administrator can tell you are fundamentally different things.
Teachers
Teachers are the closest witnesses to the gap between policy and practice. The questions I bring to teacher interviews and focus groups are designed to surface what is actually happening in classrooms, not what the self-study describes. If the triangulation reveals a misalignment between the school’s stated commitment to an instructional practice and the classroom evidence, I ask teachers directly: “When you design a major unit, how are you and your team incorporating this instructional practice, and how does that connect not only to the school’s commitment to improving student outcomes but also to its mission and vision?” The goal is not to catch anyone in a contradiction. It is to understand the instructional reality with enough specificity to make the final report genuinely useful.
Counselors
Counselors often hold a school’s equity story in ways that neither the self-study nor the curriculum maps can fully capture. They see which students are not accessing advanced coursework, which students and families are disengaged from the support systems the school believes are working, and where the official narrative of student support diverges from the daily experience of students navigating the system. If the triangulation reveals that equity goals are stated prominently but action items in committee minutes are routinely deferred, counselors are the right stakeholders to ask: “How does the school identify students who are falling through the gaps in the support framework, and what does the intervention timeline actually look like once a student is flagged?”
Classified Employees
Classified employees are consistently the most underutilized source of institutional knowledge in any accreditation review. Paraeducators, office staff, campus aides, and custodial teams interact with students and families in contexts that certificated staff rarely see. If the self-study emphasizes school culture and belonging as strengths, I ask classified staff: “When a student or family comes to you with a concern or a need, what does that process look like, and do you feel equipped to connect them to the right support?” The answers to that question frequently reveal whether a school’s stated culture of belonging is structural or merely aspirational.
District Leadership
District leadership questions operate at a different altitude. The triangulation findings that matter most at this level are the ones that reveal systemic constraints — resource allocation gaps, policy-practice disconnects, and the degree to which site-level improvement efforts are aligned with or undermined by district priorities. If the professional development records show a strong commitment to instructional coaching but the time structures necessary to make coaching sustainable are absent, the question for district leadership is direct: “What structural supports and resource commitments has the district made to ensure that site-level instructional coaching can be sustained beyond this accreditation cycle?”
The power of this differentiated approach is that it transforms the site visit from a series of parallel conversations into a layered investigation. Each stakeholder group contributes a different angle of vision on the same institutional story. NotebookLM helps me map those angles before I arrive so that by the end of the visit, I am not assembling a picture from scratch. I am confirming, complicating, and deepening an analysis that was already well underway. That is what separates a review that documents a school from a review that understands one. And understanding, not documentation, is what drives real improvement.
Preparing the Visiting Committee
Before the site visit, I compile a concise pre-visit summary for the full review committee using NotebookLM’s analysis as the foundation. This document distills the key strengths, the primary areas for inquiry, and the specific questions each committee member should carry into their assigned observations and interviews.
In recent reviews, I have also created a short audio overview using NotebookLM’s podcast feature — a three-to-five-minute summary that committee members can listen to in transit. I have paired this with a brief slideshow covering the same ground. Ultimately, the goal is simple: every reviewer arrives on-site with a shared analytical framework, not just a shared schedule. Consistency in preparation produces consistency in observation.
Real-Time Integration During the Visit
During the interviews and observations, I take structured observation notes in real time and upload each set into NotebookLM as a new document throughout the day. The AI then compares these emerging observations against the pre-visit analysis, flags inconsistencies or unexpected confirmations, and begins suggesting follow-up questions for afternoon interviews based on what morning observations revealed.
For example: if a principal’s opening remarks emphasize inclusive instructional practices but classroom observations across the morning show limited use of culturally responsive materials, NotebookLM surfaces that contradiction immediately, before the afternoon debrief, not after the visit concludes. That real-time feedback loop changes the nature of on-site inquiry from retrospective to adaptive.
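Those real-time comparisons work best when every note set arrives in the same shape. Here is a minimal sketch of one way to scaffold that, assuming plain-text notes and a hypothetical new_note helper; the headings are my own convention, not a WASC template.

```python
from datetime import datetime
from pathlib import Path

# Illustrative skeleton for a structured observation note. The headings
# are an example convention, not a WASC requirement; adjust to your protocol.
TEMPLATE = """Observation: {location} ({timestamp})
Focus area (from pre-visit analysis):
What I observed:
Relation to self-study claims (confirms / complicates / contradicts):
Follow-up questions for afternoon interviews:
"""

def new_note(location: str, notes_dir: Path = Path("visit_notes")) -> Path:
    """Create a clearly labeled note file, ready to fill in and upload."""
    notes_dir.mkdir(exist_ok=True)
    ts = datetime.now().strftime("%Y-%m-%d %H%M")
    path = notes_dir / f"Observation Notes {location} {ts}.txt"
    path.write_text(TEMPLATE.format(location=location, timestamp=ts),
                    encoding="utf-8")
    return path

if __name__ == "__main__":
    print(new_note("Grade 9 ELA Classroom"))
```

The descriptive file name matters here for the same reason it did during the evidence upload: NotebookLM’s analysis is only as legible as the source labels it is comparing.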
The Final Report as Human-AI Collaboration
After the visit, the report-writing process draws on four streams simultaneously: my professional judgment, NotebookLM’s triangulated analysis, the on-site evidence gathered during the visit, and the committee’s collective insights from observations and interviews. The result is a report that is evidence-based at every claim, thematically organized rather than criterion-by-criterion, and genuinely actionable for the school’s leadership.
The writing remains mine. The AI does not draft the report: it organizes the evidence that informs it. That distinction is important. Accreditation reports carry professional authority and ethical weight. The human reviewer is accountable for every judgment rendered, and that accountability requires human authorship.
Why This Works for WASC Accreditation
The argument for AI integration in accreditation is not about efficiency, though efficiency is a real benefit. It is about analytical depth. A well-prompted NotebookLM session can process five hundred pages of evidence and surface cross-document patterns in minutes, patterns that a human reviewer would eventually find, but perhaps not before the site visit, and perhaps not with the precision needed to generate the right questions.
The reviewer who uses this workflow does not do less work. They do different work: less time on raw information management, more time on judgment, interpretation, and the relational intelligence that no model can replicate. The best WASC reviews are ultimately acts of partnership between a review team and a school community. They require trust, candor, and a genuine commitment to improvement. NotebookLM does not build that trust. It frees the chair and reviewing committee to invest more fully in the work that does.
If you are a WASC reviewer or accreditation professional, I encourage you to experiment with this workflow. Read the self-study first, on your own, with your full professional attention. Then let the AI do what it does well (process volume, surface patterns, and generate targeted questions) so that you can do what you do well: lead a review that actually helps a school grow.
Last, if you are a school writing your own WASC self-study, the same process works in reverse: load your evidence into NotebookLM first, then test whether the claims in your draft narrative hold up against it before the Visiting Committee ever does. The same principles apply, and it is one more reason NotebookLM is such a powerful tool.