As a manager using Attuned's engagement analysis, I want a feature that lets me see how each respondent actually answered each survey question — whether they chose “completely not applicable,” “not applicable,” “slightly not applicable,” “slightly applicable,” “applicable,” or “very applicable.”

Currently, engagement results are aggregated: you can see a summary score for each motivator, but not how specific team members answered specific questions. For example, if you want to know whether someone has issues with “growth” or “feedback,” there is no way to find out; all you can do is guess from team-level aggregates.

This matters because aggregated scores can hide what is actually happening at the individual level. A team's “safety” score may look stable while one person feels increasingly uneasy about job security and another feels at ease. A flat “creativity” score may hide the fact that some members chose “not applicable” on a question about room for experimentation. Without seeing each member's actual responses, you miss these signals entirely and don't know where to focus your 1-on-1 efforts.

I understand that displaying the raw 6-point answers per individual may be more detail than is practically needed. As with the satisfaction breakdown on the current engagement page, it would be sufficient to group responses into three categories: positive (slightly applicable / applicable / very applicable), negative (completely not applicable / not applicable / slightly not applicable), and neutral. What matters is being able to grasp which direction each member is moving, not the exact point on the scale.

Regarding privacy: we fully understand that Attuned was designed around trust, and that anonymous, aggregated engagement results are at its core.
Results are only meaningful in an environment where respondents can answer honestly with peace of mind, and we respect that design philosophy. Even so, I think there is room for a per-company opt-in. If the survey itself clearly tells respondents that responses are not anonymous and will be used only to improve the team's situation, and clearly promises that there will be no retaliation for negative responses and no rewards for positive ones, then it may be acceptable for a company to enable this level of visibility. What matters is that respondents understood and agreed to it beforehand.

EXPECTED FEATURES:
- In addition to the aggregated score, a way to check each respondent's response trend (positive / neutral / negative) for each question — or even the actual ratings, if privacy settings allow
- An option to filter by individual motivators (e.g. safety, competitiveness, altruism) or to display across all 11 motivators
- Account-level settings that a company can enable once it is clear to respondents that the survey is non-anonymous

With this feature, the engagement page would be far more actionable. Instead of guessing from aggregated numbers, managers could accurately grasp the state each member is in and run 1-on-1s in a more targeted way.

Supplement: being able to compare these individual responses over time, across survey batches, would be even more powerful. I'm posting that as a separate request: “A feature to easily compare question-level engagement scores across survey periods.”
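To make the proposed grouping concrete, here is a minimal sketch of how the 6-point scale could collapse into the three buckets described above. The label strings and the treatment of unanswered questions as “neutral” are my assumptions for illustration, not Attuned's actual data model or API.

```python
# Hypothetical sketch of the proposed 3-bucket grouping.
# Label strings are assumptions, not Attuned's actual data model.

NEGATIVE = {"completely not applicable", "not applicable", "slightly not applicable"}
POSITIVE = {"slightly applicable", "applicable", "very applicable"}

def bucket(response):
    """Collapse a raw 6-point response into positive/negative/neutral."""
    if response in POSITIVE:
        return "positive"
    if response in NEGATIVE:
        return "negative"
    return "neutral"  # unanswered or unrecognized responses (assumption)

# Example: per-respondent trend for a single question.
answers = {
    "respondent_a": "very applicable",
    "respondent_b": "slightly not applicable",
    "respondent_c": None,  # did not answer
}
trend = {name: bucket(a) for name, a in answers.items()}
```

The point of the mapping is exactly what the request argues: direction (positive vs. negative) is preserved, while the finer distinctions within each side are deliberately discarded.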