Mastering Feedback: 43 Survey Questions to Improve Customer Loyalty (2026)
Customer loyalty isn’t guaranteed—PwC notes that 32% of customers will leave a brand they love after one bad experience. To improve, you need to ask the right questions at the right time and act on the answers. The 43 questions below are organized in four groups so you can pick by survey type and keep each survey short and goal-focused: product satisfaction (e.g. “On a scale of 1–10, how satisfied are you with [product]?”, “Which feature do you use most?”, “What is one upgrade you’d like?”); customer service (e.g. “How would you rate the knowledge of the representative?”, “Was the issue resolved on first contact?”); NPS (e.g. “How likely are you to recommend us?”, “What is the primary reason for your score?”); and CES (e.g. “It was easy to find the information I needed,” “The checkout was quick and straightforward”). You also get 7 do’s and don’ts (don’t rush the draft; do allow skips; don’t use leading questions; do time by context; don’t act like a machine, i.e. use conversational UI; do set one clear goal per survey; do keep it under 3 minutes and use conditional logic) and workflows to close the loop (e.g. low score → Slack alert). For survey design, see high-impact surveys: 12 best practices and NPS survey best practices. For CSAT focus, see 12 customer satisfaction questions.
From feedback to action
Gathering data is step one. Acting on it—alerting the team on low scores, following up with detractors, fixing issues and telling customers what changed—turns survey questions into loyalty. Use a form builder with conditional logic and webhooks (e.g. AntForms) so low scores trigger real-time follow-up. This guide lists the 43 questions in four groups (product, support, NPS, CES), expands the do’s and don’ts, and shows how to close the loop. For survey design and length, see high-impact surveys: 12 best practices and how to conduct an online survey in 7 steps. For satisfaction focus, see 12 customer satisfaction questions.
The 43 questions in four groups
You do not need to use all 43 in one survey. Pick one goal per survey (product satisfaction, support, NPS, or CES) and select 5–10 questions that serve it. For a product survey, start with Q1 (satisfaction scale), add Q2 or Q3 (feature use or desired upgrade), and Q6 (reason for score) with conditional logic for low scores. For support, use Q13–Q15 and Q21, plus one open-ended question (Q20 or Q23). For NPS, always include Q24 and Q25 (or Q26); add Q27 or Q30 if you want segmentation. For CES, pick 2–4 from Q34–Q43 that match the task (e.g. for checkout: Q35, Q38, Q40).

Sample post-purchase flow: Q1 (satisfaction) → if 1–6, show Q6 (reason); if 7–10, show Q3 (one upgrade). Then Q24 (NPS) and Q25 (reason for score). That is 4–6 questions and under 2 minutes; the sketch below shows the branching rule in code. Use conditional logic to show follow-ups (e.g. “What is the main reason for your score?” only after NPS) and keep the path under 3 minutes. Below, questions are grouped by theme; each group has more than you would use in a single survey so you can mix and match.
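To make the branching concrete, here is a minimal sketch of the sample flow’s rule in TypeScript. The question IDs are illustrative, and any real form builder (e.g. AntForms) expresses this in its own conditional-logic settings rather than code, so treat it as pseudocode for the rule, not a product’s syntax:

```ts
// Branching rule from the sample post-purchase flow above.
// Question IDs are illustrative placeholders, not a real builder's API.

type FollowUp = "q6_reason_for_score" | "q3_one_upgrade";

function followUpFor(q1Satisfaction: number): FollowUp {
  // Q1 answered 1–6 → show Q6 (reason); 7–10 → show Q3 (one upgrade).
  return q1Satisfaction <= 6 ? "q6_reason_for_score" : "q3_one_upgrade";
}

console.log(followUpFor(4)); // "q6_reason_for_score"
console.log(followUpFor(9)); // "q3_one_upgrade"
```

Whatever tool you use, the point is the same: one rating question gates which follow-up appears, so nobody sees a question that is irrelevant to their answer.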
Product satisfaction (12 questions)
- On a scale of 1–10, how satisfied are you with [product]?
- Which feature do you use most?
- What is one upgrade or improvement you would like?
- How well does [product] meet your expectations? (1–5 or 1–10)
- How likely are you to continue using [product] in the next 6 months?
- What is the main reason for your satisfaction score? (open-ended)
- Which feature is most important to you?
- How would you describe [product] to a colleague? (open-ended)
- What almost stopped you from choosing [product]? (open-ended)
- How does [product] compare to alternatives you have used? (much worse – much better)
- What would make you more satisfied with [product]? (open-ended)
- How often do you use [product]? (daily / weekly / monthly / rarely)
Use 2–4 of these per product survey; add conditional logic so low-satisfaction respondents see an open-ended follow-up (e.g. “What is the main reason?”). When to use: after first use, after a key milestone (e.g. 30 days), or after a product update. Do not send product satisfaction and NPS in the same survey if that would push you over 3 minutes; split into two surveys, or make NPS primary and add one product question. For product feedback in depth, see 10 essential product survey questions.
Customer service (11 questions)
- How would you rate the knowledge of the representative? (1–5)
- Was your issue resolved on first contact? (Yes / No)
- How easy was it to get in touch with support? (1–5)
- How would you rate the clarity of the solution provided? (1–5)
- How long did it take to resolve your issue? (faster than expected / as expected / longer than expected)
- Did the representative show empathy and understanding? (Yes / No or scale)
- Would you contact support again if you had another issue? (Yes / No)
- What could we have done better? (open-ended)
- How would you rate the overall support experience? (1–10)
- Was the support channel (chat, email, phone) right for your issue? (Yes / No)
- Is there anything else you would like to add about your support experience? (open-ended)
Use 3–5 per post-support survey; branch “Was the issue resolved?” so a “No” gets a follow-up. When to use: right after a support ticket is closed (email or in-app); a short window (e.g. within 24–48 hours) gets a better response rate. If you have many support contacts, sample or use a rotating subset so you do not over-survey. For CSAT question sets, see 12 customer satisfaction questions.
NPS and recommendation (10 questions)
- How likely are you to recommend us to a friend or colleague? (0–10, NPS scale)
- What is the primary reason for your score? (open-ended)
- What would need to change for you to give a higher score? (open-ended)
- Which of these best describes why you gave that score? (multiple choice: value, quality, support, ease of use, other)
- Have you recommended us in the past 6 months? (Yes / No)
- How would you describe us to someone who has never used us? (open-ended)
- What do you value most about our [product/service]? (open-ended)
- What is one thing we could do to make you more likely to recommend us? (open-ended)
- How do we compare to [competitor] in your view? (open-ended or scale)
- Would you be willing to provide a testimonial or case study? (Yes / No)
Always pair NPS (Q24) with at least one open-ended question (Q25 or Q26) so you have reasons, not just a number. Segment respondents into promoters (9–10), passives (7–8), and detractors (0–6), and close the loop with detractors first; the sketch below shows the standard segmentation and score calculation. When to use: post-purchase (e.g. after 30 days), post-onboarding, or quarterly. One NPS survey per customer per quarter is often enough; more can cause fatigue. For NPS design, see NPS survey best practices 2026.
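If you compute NPS yourself from exported responses, the math is simple: the share of promoters minus the share of detractors. A minimal TypeScript sketch, assuming you have the raw 0–10 scores for Q24 (the function names here are ours, not a library’s):

```ts
// Standard NPS bands: promoter 9–10, passive 7–8, detractor 0–6.

type Segment = "promoter" | "passive" | "detractor";

function segmentOf(score: number): Segment {
  if (score >= 9) return "promoter";
  if (score >= 7) return "passive";
  return "detractor";
}

function npsScore(scores: number[]): number {
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  // NPS = % promoters − % detractors, a value from −100 to +100.
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

console.log(npsScore([10, 9, 8, 6, 3, 10])); // 3 promoters, 2 detractors → 17
```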
Customer effort (CES) (10 questions)
- It was easy to find the information I needed. (Strongly disagree – Strongly agree)
- The checkout was quick and straightforward. (Strongly disagree – Strongly agree)
- I was able to resolve my issue without putting in much effort. (Strongly disagree – Strongly agree)
- How much effort did you have to put in to [complete the task]? (Very low – Very high)
- The process was simpler than I expected. (Strongly disagree – Strongly agree)
- I did not need to contact support to complete this. (Strongly disagree – Strongly agree)
- The steps were clear and easy to follow. (Strongly disagree – Strongly agree)
- How would you rate the ease of [signup / purchase / support]? (1–5 or 1–10)
- What would have made this easier? (open-ended)
- Was there any step that felt confusing or unnecessary? (open-ended)
Use CES after a specific task (checkout, signup, support resolution); low effort correlates with loyalty. When to use: immediately after the task (e.g. post-checkout, post-signup, or after support resolution). Keep to 2–4 questions per touchpoint so the survey feels light. For effort and empathy in feedback, see empathy-led feedback beyond star ratings and customer loyalty psychology and forms.
Do’s and don’ts (expanded)
- Do not rush the draft. Define one goal per survey; map each question to that goal. Test with 2–3 people before launch.
- Do allow skips. Let respondents skip optional questions; use required fields only where you need an answer to act.
- Do not use leading questions. Use neutral wording (e.g. “How would you rate…?” not “Don’t you agree that…?”).
- Do time by context. Send post-purchase surveys after delivery or first use; post-support surveys right after resolution; NPS at a natural moment (e.g. after 30 days of use). Do not over-survey; space requests.
- Do not act like a machine. Use conversational UI (one question at a time or short steps), friendly microcopy, and a clear reason for the survey.
- Do set one clear goal per survey. Product OR support OR NPS OR CES—mixing too many goals lengthens the survey and dilutes focus.
- Do keep it under 3 minutes and use conditional logic. Fewer questions; branch so only relevant questions show. For design depth, see high-impact surveys: 12 best practices.

Why loyalty surveys matter: Loyalty drives retention, referrals, and LTV. Without feedback, you guess why people stay or leave. The 43 questions give you a library to measure product, support, NPS, and effort; closing the loop turns that data into action (fix issues, follow up with detractors, share what you changed). Over time, a consistent feedback program improves both scores and behavior.

Invite and thank-you: In the invite, state how long the survey takes and how you will use the data (e.g. “Takes 2 minutes; we use your feedback to improve product and support”). On the thank-you page, say what happens next (e.g. “We read every response; you may hear from us if we have a follow-up question”). That sets expectations and builds trust.
Closing the loop
Closing the loop means (1) acting on feedback (fixing issues, following up with detractors, sharing findings internally) and (2) telling customers what you learned and what you will do.

Workflows: Use webhooks or integrations so a low NPS or CSAT score triggers an alert (e.g. Slack, email to the support lead); assign an owner to follow up within 24–48 hours. Detractor follow-up: reach out personally (email or call) to understand and fix the issue; many detractors become promoters if you act fast. Share outcomes: email participants a short summary (“Here is what we heard; here is what we are changing”) so they see their input mattered.

Example workflow: NPS survey → if score 0–6, a webhook sends the respondent’s email to Slack → a support or success owner reaches out within 24–48 hours → after the fix, log the outcome and optionally send a thank-you. The sketch below shows this pattern in code. For webhooks to automate alerts, see webhooks: instant lead notifications to Slack and email and webhooks: send form submissions to CRM. Tools: use a form builder with conditional logic, multi-step surveys, and webhooks (e.g. AntForms) so you can build one survey, branch by score or answer, and trigger follow-up automatically.
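As an illustration of the alert step, here is a hedged sketch of a tiny Node.js endpoint (Node 18+ for global fetch) that receives a form webhook and forwards detractor scores to a Slack incoming webhook. The payload field names (email, npsScore) and the environment variable are assumptions; match them to whatever your form builder actually sends:

```ts
// Minimal "low NPS → Slack alert" receiver. Payload shape is assumed,
// not any specific form builder's documented format.
import http from "node:http";

const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL ?? "";

http.createServer(async (req, res) => {
  // Read the raw request body sent by the form's webhook.
  const body = await new Promise<string>((resolve) => {
    let data = "";
    req.on("data", (chunk) => (data += chunk));
    req.on("end", () => resolve(data));
  });
  const { email, npsScore } = JSON.parse(body); // assumed field names

  if (typeof npsScore === "number" && npsScore <= 6) {
    // Detractor: alert the channel so an owner can follow up in 24–48 h.
    await fetch(SLACK_WEBHOOK_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        text: `Detractor alert: ${email} scored ${npsScore}/10.`,
      }),
    });
  }
  res.writeHead(200).end("ok");
}).listen(3000);
```

In practice you would add signature verification and error handling, but the shape is the point: score gate, alert, human owner.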
Strong vs weak feedback program
| Aspect | Weak | Strong |
|---|---|---|
| Goal | Vague or multiple goals per survey | One goal per survey; questions map to it |
| Length | Long; many questions | Under 3 minutes; conditional logic |
| Timing | Random or too frequent | Timed by context (post-purchase, post-support); spaced |
| Follow-up | No follow-up on low scores | Alerts; human follow-up with detractors |
| Closing loop | Data collected but not acted on | Act and tell customers what changed |
| Questions | Leading or generic | Neutral; product/support/NPS/CES as needed |
Measuring loyalty over time: Track NPS or CSAT by cohort (e.g. by signup month or product tier) and by touchpoint (post-purchase, post-support). Watch for trends: is NPS improving after you closed the loop with detractors? Is CES improving after you simplified checkout? Use the same core questions so you can compare over time; add or rotate 1–2 open-ended questions for qualitative depth. For form analytics, see form analytics: what metrics actually matter.

Data and reporting: Export responses to a spreadsheet or connect a BI tool for trend analysis. Tag open-ended responses by theme (e.g. “pricing,” “support speed”) so you can count and report them; a counting sketch follows at the end of this section. Share a monthly or quarterly summary with product, support, and leadership; assign owners to top issues. For survey design and qualitative analysis, see the research compass: qualitative vs quantitative data.

Who should own feedback: Often product or customer success owns NPS and product surveys; support owns post-support CSAT. Align on one place for alerts and follow-up (e.g. a Slack channel or CRM workflow) so detractors are never missed.

Scaling feedback: As volume grows, keep the same core questions so trends are comparable; add sampling if you cannot survey everyone. Automate alerts and routing so human follow-up focuses on high-value or high-risk accounts. Form builders with webhooks and unlimited responses (e.g. AntForms) let you scale without changing the survey design.

Benchmarks and goals: Industry NPS benchmarks vary by sector; focus on your own trend (improving vs declining) and on closing the gap between promoters and detractors. Set an internal goal (e.g. “NPS +5 this quarter” or “CSAT 90%+ on first contact”) and tie it to closing the loop and fixing top issues.

Example detractor follow-up: “Hi [Name], we saw your recent feedback and we are sorry we let you down. We would like to understand what went wrong and fix it. Would you have 10 minutes for a quick call this week? [Link to calendar].” A personal, fast follow-up often wins detractors back.

Response rate tips: Keep surveys under 3 minutes; send at a relevant moment (e.g. right after support resolution); state how long the survey takes and how you will use the data; avoid over-surveying the same users. Conditional logic shortens the path so more people complete.
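For theme tagging at small scale, even simple keyword matching gets you countable themes. A minimal sketch, assuming the illustrative keywords below; in practice you would tag manually or with a classifier:

```ts
// Count open-ended responses per theme via keyword matching.
// Themes and keywords are illustrative, not exhaustive.

const THEMES: Record<string, RegExp> = {
  pricing: /pricing|price|expensive|cost/i,
  "support speed": /slow|wait|response time|took.*long/i,
};

function countThemes(responses: string[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const text of responses) {
    for (const [theme, pattern] of Object.entries(THEMES)) {
      if (pattern.test(text)) counts[theme] = (counts[theme] ?? 0) + 1;
    }
  }
  return counts;
}

console.log(countThemes([
  "Pricing feels high for the value",
  "Support took too long to respond",
]));
// → { pricing: 1, "support speed": 1 }
```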
Common pitfalls
- Too many questions in one survey: Stick to one goal and 5–10 questions; use conditional logic for follow-ups.
- No follow-up on low scores: Low NPS or CSAT without follow-up signals that feedback does not matter. Set up alerts and owners.
- Leading questions: They bias results. Use neutral wording and full response ranges.
- Survey fatigue: Sending too many surveys or too long surveys reduces response. Space them and keep each short.
- Not closing the loop: If customers never hear what you did with feedback, they stop responding. Share outcomes and thank them.
- Same survey for everyone: Use conditional logic to branch by segment (e.g. product tier, support channel) so questions are relevant. A one-size-fits-all survey often feels long or irrelevant.
Segmenting results: Break down NPS or CSAT by segment (e.g. product, region, signup cohort) to see where loyalty is strong or weak. Prioritize closing the loop in segments with the most detractors or the biggest drop.

Frequency and cadence: Do not send multiple surveys in a short window. Space NPS to quarterly (or after key milestones); post-support and post-purchase surveys can be sent once per touchpoint. For churn and retention surveys, see exit surveys for churn and retention and reducing SaaS churn with exit surveys.
Pre-launch checklist
- One clear goal per survey; questions mapped to it
- 5–10 questions (or equivalent with conditional logic); under 3 minutes
- No leading questions; allow skips where appropriate
- Timed by context (post-purchase, post-support, etc.); not over-surveying
- Conditional logic for follow-ups (e.g. reason for score after NPS)
- Webhooks or alerts for low scores; owner assigned to follow up
- Plan to close the loop (share findings and actions with respondents)
When to use which question group: Use product satisfaction after purchase or at key milestones (e.g. 30 days). Use customer service right after a support contact. Use NPS post-purchase, post-onboarding, or quarterly. Use CES after a specific task (checkout, signup, support resolution). Do not mix all four in one survey; pick one primary goal and add 1–2 questions from another group only if they serve that goal. For response rates, see how to build surveys that get 80%+ response rates. For survey methodology, see how to conduct an online survey in 7 steps.
Frequently asked questions
What are good customer loyalty survey questions?
Use a mix of product satisfaction, customer service, NPS, and CES questions. Pick questions that match your goal; keep under 3 minutes and use conditional logic. See the 43 questions in the four groups above.
How many questions should a customer feedback survey have?
Keep it under 3 minutes, which usually means 5–10 questions per survey. Use conditional logic so respondents only see relevant questions. One goal per survey.
What is closing the loop in feedback?
Acting on feedback and telling customers: follow up with detractors, fix issues, share what you learned and what you will do. Low scores trigger alerts; then a human reaches out.
What is the difference between NPS, CSAT, and CES?
NPS = likelihood to recommend (0–10). CSAT = satisfaction (e.g. 1–5). CES = effort (how easy was the task). Use each at the right touchpoint.
When should I send a customer feedback survey?
After purchase, after support, after key product use, or at a cadence (e.g. quarterly). Do not over-survey; time by context.
Summary and next steps
Summary: The 43 survey questions in this guide cover product satisfaction (12), customer service (11), NPS (10), and CES (10). Use one goal per survey, 5–10 questions, under 3 minutes, and conditional logic. Follow the do’s and don’ts; close the loop with alerts, detractor follow-up, and shared outcomes. Form builders like AntForms support conditional logic and webhooks so you can build feedback surveys and automate follow-up. Use the “when to use which question group” guidance and sample flows above to choose questions; use the strong vs weak table to audit your program. Track NPS/CSAT over time and tie improvements to closing the loop and fixing recurring issues.

Recap: Product (Q1–12): satisfaction, features, upgrades. Support (Q13–23): rep knowledge, resolution, ease. NPS (Q24–33): recommend, reason, testimonial. CES (Q34–43): ease of task, effort. Pick 5–10 per survey, one goal, under 3 minutes, and close the loop.

Quick reference by survey type: Product survey → Q1, Q3, Q6 (conditional), Q24, Q25. Support survey → Q14, Q21, Q20 or Q23. NPS-only → Q24, Q25, Q27. CES → Q35, Q38, Q42. Each of these sets stays under 2–3 minutes.
Next steps: Pick one survey type (product, support, NPS, or CES) and select 5–10 questions from the list. Set up conditional logic and a webhook or alert for low scores. Plan who will follow up and how you will share results with respondents. For survey flow and response rates, see how to build surveys that get 80%+ response rates.

Integrating with CRM and support: Send survey responses to your CRM (e.g. HubSpot, Salesforce) or support tool so NPS/CSAT is attached to the contact or ticket. That way follow-up and closing the loop happen in one place. Use webhooks (e.g. AntForms webhooks) to push submissions to Slack, email, or your CRM; a generic push sketch follows below.

Survey templates by use case: (1) Post-purchase product survey: Q1, Q3, Q24, Q25 (4 questions). (2) Post-support CSAT: Q14, Q21, Q20 (3 questions). (3) Quarterly NPS: Q24, Q25, Q27 (3 questions). (4) Post-checkout CES: Q35, Q38, Q42 (3 questions). Each stays under 2 minutes and has one clear goal.
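For the CRM push, here is a hedged sketch of the request shape, assuming a generic REST endpoint and placeholder field names (HubSpot, Salesforce, and other CRMs each have their own real APIs, so this is illustrative only):

```ts
// Push a survey response into a CRM contact record.
// URL, auth header, and property names are placeholders; replace
// them with your CRM's actual API and field schema.

interface SurveyResponse {
  email: string;
  npsScore: number;
  reason?: string;
}

async function pushToCrm(response: SurveyResponse): Promise<void> {
  await fetch("https://crm.example.com/api/contacts/upsert", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.CRM_API_TOKEN}`,
    },
    body: JSON.stringify({
      email: response.email,
      properties: {
        last_nps_score: response.npsScore,
        last_nps_reason: response.reason ?? "",
        last_survey_at: new Date().toISOString(),
      },
    }),
  });
}
```

The design choice that matters: key the upsert on the same identifier your survey collects (usually email), so every score lands on the right contact and follow-up happens in one place.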
Key takeaway: 43 survey questions give you product, support, NPS, and CES coverage. Use conditional logic, keep under 3 minutes, and close the loop so feedback improves loyalty. Pick one goal per survey, 5–10 questions from the list, set up alerts for low scores, and follow up with detractors—then share what you changed so customers see their input mattered.
Try AntForms to build feedback surveys with conditional logic and integrations. For more, read 12 customer satisfaction questions, NPS survey best practices, and high-impact surveys: 12 best practices.
