Why We're Starting Fresh
This is not a follow-up to the original surveys. The original surveys had fundamental design problems — anonymity, scattered scope, vague questions — that cannot be fixed by tweaking the wording. Rather than iterate on a flawed foundation, we're building a new instrument from scratch with a clear purpose and a defined strategy for what we do with the data.
The NHA App has three core communication functions that parents and staff use daily: Direct Messaging, Posts and Announcements, and Notifications. Every question in the new survey maps to one of these three functions. If a question doesn't help us improve one of these three areas, it doesn't belong in the survey.
What's Different
| Aspect | Old Approach | New Approach |
|---|---|---|
| Identity | Anonymous Google Form | Authenticated via NHA account |
| Follow-up | Impossible — no way to contact respondents | Direct follow-up with any respondent |
| Scope | Everything — onboarding, SPARK, payments, satisfaction, readiness | Three core functions: Messaging, Posts, Notifications |
| Audience | One survey per audience, same questions for everyone | Separate versions for parents and staff with role-appropriate questions |
| Question design | General satisfaction scales and open-ended prompts | Behavior-specific questions that identify specific problems |
| Segmentation | None — all responses aggregated | By school, device, role, and usage patterns |
| Action path | Vague data → unclear priorities | Every question maps to a development decision |
| Distribution | Blast to everyone — hope for responses | Usage-data targeting — only survey people with meaningful experience |
| Skip logic | Conditional branching for "have you used X?" gating questions | None needed — every respondent is pre-qualified by usage data |
Design Principles
The Theory
The NHA App is replacing SchoolConnect for a population of parents who are, in many cases, not deeply engaged with school technology. Many are using an app for school communication for the first time. Some have limited English proficiency. Many access the app exclusively on phones, often over unreliable internet connections.
This population is unlikely to fill out a long survey, unlikely to write detailed open-ended responses, and unlikely to understand what we're asking if the questions use product management jargon. The survey must be short, concrete, and written in plain language.
At the same time, staff are managing a transition while still running their classrooms. Their frustrations are different — they're concerned about whether parents are seeing their messages, whether the tools work reliably enough to depend on, and whether the support pipeline actually resolves issues. Staff questions need to reflect the operational reality of being the sender, not the receiver.
The core hypothesis: The three communication functions (messaging, posts, notifications) are what determine whether parents stay engaged with the app or abandon it. If messaging works and notifications are reliable, parents will use the app. If posts are easy to read and attachments open correctly, the app becomes their primary source of school information. Everything else — SPARK, payments, calendar — is secondary to getting the communication layer right.
Removing anonymity is not about surveillance — it's about service. When a parent tells us "I never receive notifications," we need to be able to check their device type, notification settings, and account status. We need to be able to reach back out and say: "We found the issue — your push notifications were disabled at the OS level. Here's how to fix it." Or: "We identified a bug affecting Android users at your school. It's fixed now."
Anonymous feedback turns every report into a mystery we can't solve. Authenticated feedback turns it into a support ticket we can close.
Usage-Data-Driven Targeting
The biggest change from the original approach: we're not surveying everyone. We're reviewing actual usage data from the NHA App to find the parents and staff whose feedback is grounded in real use: people who have used the app enough to know what works and what doesn't.
Our analysis of usage data across the three pilot schools identified approximately 200 parents and 100 staff members with meaningful engagement — defined by a combination of messages sent, messages received, posts viewed, and posts created. These aren't casual users who opened the app once. They're people who have genuinely tried to use messaging, read posts, and depend on notifications.
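To make the selection concrete, here's a minimal sketch of the kind of threshold filter involved. The file name, column names, and cutoffs below are illustrative placeholders, not the exact criteria we used; the real criteria combine the same four signals described above.

```python
import pandas as pd

# Hypothetical per-user export of activity counts from the NHA App.
# Column names and thresholds are placeholders; the real criteria
# combine the same four signals (messages sent/received, posts
# viewed/created), with cutoffs set against the pilot data.
usage = pd.read_csv("nha_usage_by_user.csv")

engaged = (
    (usage["messages_sent"] + usage["messages_received"] >= 5)
    | (usage["posts_viewed"] >= 10)
    | (usage["posts_created"] >= 3)
)
cohort = usage[engaged]

# Split the qualifying users into the two survey audiences.
parents = cohort[cohort["role"] == "parent"]
staff = cohort[cohort["role"] == "staff"]
print(f"{len(parents)} parents and {len(staff)} staff qualify")
```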
We already know what they've used before we ask. The old survey needed gating questions like "Have you used the direct messaging feature?" to route respondents through skip logic. That's gone. If someone is receiving this survey, it's because we can see in the data that they've sent messages, viewed posts, or both. Every question can go straight to quality and issues: no screening, no branching, no wasted screens.
This also means we can cross-reference survey responses with actual behavior. If a parent says "I never receive notifications" and our data shows they've viewed 26 posts, that tells us notifications might not be the path they're using — they're checking the app manually. That's a different kind of insight than anonymous survey data could ever provide.
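Because every response is tied to an NHA account, that cross-check is a simple join. A sketch of the idea, assuming the survey tool exports responses keyed by the same account_id as the usage data; the column names and answer codes here are hypothetical:

```python
import pandas as pd

# Hypothetical exports: authenticated responses share an account_id
# key with the usage data, so the two tables join directly.
responses = pd.read_csv("parent_survey_responses.csv")
usage = pd.read_csv("nha_usage_by_user.csv")
merged = responses.merge(usage, on="account_id", how="left")

# Parents who report missing notifications but clearly read posts are
# probably checking the app manually rather than being pushed to it.
manual_checkers = merged[
    (merged["q4_notifications"] == "rarely_or_never")
    & (merged["posts_viewed"] >= 20)
]
print(manual_checkers[["account_id", "device", "posts_viewed"]])
```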
The practical impact on survey design:
- No gating questions. "Have you used messaging?" is gone. We selected them because they have.
- No skip logic. Every respondent gets every question. No conditional branching to manage or test.
- Shorter surveys. 8 questions each instead of 10; cutting the screening questions left room to tighten the rest.
- Higher response quality. Every answer comes from someone with firsthand experience, not someone guessing.
Proposed Questions: Parent Survey
8 questions. Every respondent has been identified as an active user through usage data — so we skip straight to issues and quality. No screening questions, no skip logic. Organized around the three core functions, with one open-ended question at the end.
Proposed Questions: Staff Survey
8 questions. The staff survey mirrors the same three-function structure but asks from the perspective of the sender. Like the parent survey, usage data confirms these staff members actively use messaging and posts — so we skip frequency questions and go straight to effectiveness and issues.
What We Expect to Learn
- Notification reliability by device. By correlating Q1 (device) with Q4 (notification delivery), we'll know whether notification failures are platform-specific: an Android problem, an iOS problem, or a server-side problem. A cross-tab sketch follows this list.
- Specific bug categories at scale. The issue checklists (Q2, Q3 for parents; Q2, Q4 for staff) produce structured data we can count, prioritize, and track over time — unlike open-ended "what should we improve?" responses.
- Staff confidence as a leading indicator. Staff Q7 (delivery confidence) is the single most important number. If staff don't trust the app to deliver their messages, they won't use it — regardless of how good the features are.
- Direction of travel. Parent Q7 (comparison to before) gives us a directional signal: are we moving forward or backward? A parent who says "worse" despite heavy usage is our highest-priority follow-up.
- Usage data cross-reference. Because we know each respondent's actual usage before they answer, we can validate their self-reports. A parent who says "I rarely get notifications" but has viewed 30 posts tells us they're checking the app manually — a different problem than broken push delivery.
- Individual follow-up opportunities. Every respondent who checks "message appeared to send but wasn't received" or "I set up notifications but they don't come through" becomes a support case we can proactively resolve — turning survey frustration into a positive interaction.
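For the device correlation in the first bullet above, a single cross-tab answers the question. A sketch against the same hypothetical response export, with illustrative answer codes:

```python
import pandas as pd

responses = pd.read_csv("parent_survey_responses.csv")

# Q4 (notification delivery) by Q1 (device), as a share of each
# platform's respondents. Failures concentrated in one column point
# to a platform-specific bug; an even spread points server-side.
delivery_by_device = pd.crosstab(
    responses["q4_notifications"],  # e.g. always / sometimes / rarely_or_never
    responses["q1_device"],         # e.g. android / ios
    normalize="columns",
)
print(delivery_by_device.round(2))
```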
After this survey closes, we should be able to say: "Here are the top 3 problems, ranked by how many people are affected. Here's which devices and schools are most impacted. And here are the specific users we're following up with." If we can't make that statement with the data we collect, the survey failed — regardless of how many responses we get.
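That closing statement is itself just an aggregation over the issue checklists. A sketch, assuming a long-format export with one row per respondent per issue checked; the file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical long-format export: one row per (respondent, issue).
issues = pd.read_csv("survey_issue_selections.csv")

# Top 3 problems, ranked by distinct respondents affected.
top_problems = (
    issues.groupby("issue")["account_id"].nunique()
    .sort_values(ascending=False)
    .head(3)
)
print(top_problems)

# For each top problem, which schools and devices are hit hardest.
breakdown = (
    issues[issues["issue"].isin(top_problems.index)]
    .groupby(["issue", "school", "device"])["account_id"].nunique()
    .sort_values(ascending=False)
)
print(breakdown)
```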