Slide Deck Q&A Quality Assurance app

Upload a PDF slide deck (or provide its URL), give it a citation, and the app streams planning status while it builds a hierarchical JSON annotation containing a deck analysis, variable per-slide question budgets, and slide-level question sets.

Theory of Operation

The system operates by first extracting text and visual information from the uploaded PDF presentation deck. Large decks are chunked into contiguous, overlapping sliding windows (e.g., 8 slides per window with a 2-slide overlap). This preserves contextual awareness, allowing the system to track transitions and narrative flow across boundaries without hitting the generative model's context limits.
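The windowing scheme above can be sketched as follows. This is a minimal illustration of the stated parameters (8-slide windows with a 2-slide overlap); the function name and slide representation are assumptions, not the app's actual implementation.

```python
def sliding_windows(slides, window_size=8, overlap=2):
    """Split an ordered slide list into contiguous, overlapping windows.

    The stride is window_size - overlap, so each consecutive pair of
    windows shares `overlap` slides, preserving context across boundaries.
    """
    step = window_size - overlap
    windows = []
    for start in range(0, len(slides), step):
        windows.append(slides[start:start + window_size])
        if start + window_size >= len(slides):
            break  # last window reached the end of the deck
    return windows
```

For a 20-slide deck this yields windows covering slides 1-8, 7-14, and 13-20, so every window boundary is seen twice.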

Following extraction, the system infers crucial instructional attributes for every slide, such as its modality (e.g., diagram, table, text) and its specific role in the presentation (e.g., mechanism, summary, agenda). These attributes strictly dictate the mix of generated question types (e.g., diagram labeling vs. multiple-choice) and the initial per-slide question budget, balancing instructional importance with evidence richness.
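A rough sketch of how inferred attributes could drive the question mix and initial budget. The weight table, modality-to-question-type mapping, and the evidence-density scaling are all hypothetical, they only illustrate the balance between instructional importance (role) and evidence richness described above.

```python
# Hypothetical weights and mappings -- the document does not specify
# the actual attribute taxonomy or budget formula.
ROLE_WEIGHT = {"mechanism": 3, "summary": 2, "agenda": 0}

MODALITY_TYPES = {
    "diagram": ["diagram-labeling", "multiple-choice"],
    "table": ["data-lookup", "multiple-choice"],
    "text": ["multiple-choice", "short-answer"],
}

def slide_budget(role, evidence_density):
    """Initial per-slide question budget: role importance scaled by
    how much citable evidence the slide carries (0.0 to 1.0)."""
    base = ROLE_WEIGHT.get(role, 1)
    return round(base * (0.5 + evidence_density))
```

Under this sketch an agenda slide gets a budget of zero regardless of density, while an evidence-rich mechanism slide gets the largest allocation.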

Finally, the system executes targeted slide-level question generation and performs deck-level reconciliation using precise 1-5 rubrics. Provisional question sets are evaluated on Coverage (from scattered peripheral facts to strong representation of core concepts), Scaffolding (from an unordered assortment to a coherent progression), and Fidelity (whether answers are derivable purely from the slide). The reconciliation step uses these scores to zero out redundancies, balance coverage across learning goals, and shape a cohesive final question distribution.
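The reconciliation pass could look roughly like the sketch below: question sets whose weakest rubric dimension falls under a threshold are dropped, and duplicate question stems are zeroed out across the deck. The data shapes, threshold, and normalization are assumptions for illustration only.

```python
def reconcile(slide_sets, min_score=3):
    """Deck-level reconciliation sketch.

    Each item in slide_sets is assumed to look like:
      {"slide": int,
       "rubric": {"coverage": 1-5, "scaffolding": 1-5, "fidelity": 1-5},
       "questions": [str, ...]}
    Sets with any rubric score below min_score are discarded; surviving
    questions are de-duplicated by normalized stem across the whole deck.
    """
    seen = set()
    final = []
    for s in slide_sets:
        if min(s["rubric"].values()) < min_score:
            continue  # fails the rubric floor on at least one dimension
        kept = []
        for q in s["questions"]:
            key = q.lower().strip()  # crude redundancy check
            if key not in seen:
                seen.add(key)
                kept.append(q)
        if kept:
            final.append({"slide": s["slide"], "questions": kept})
    return final
```

A real reconciler would also rebalance budgets across learning goals rather than just filter, but the filter-and-deduplicate core shown here is the "zero out redundancies" step in its simplest form.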

One PDF only. Either upload a file or provide a URL below.