Feature Requests

A function that allows you to check the actual survey responses of each team member on a question-by-question basis
As a manager using Attuned's engagement analysis, I want a feature that lets me check how each respondent actually answered each survey question, on the full six-point scale from "not applicable at all" through "very applicable."

Currently, engagement results are aggregated: you can see a summary score for each motivator, but not how specific team members answered specific questions. For example, if you want to know whether someone has issues with "growth" or "feedback," there is no way to find out; all you can do is guess from team-level aggregated values.

This matters because aggregated scores can hide what is actually happening at the individual level. Even when the team's "safety" score looks stable, one person may feel increasingly uneasy about job security while another feels at ease. Or the "creativity" score may be flat while it stays hidden that some members chose "not applicable" on a question about room for experimentation. Without seeing each member's actual responses, you miss these signals entirely and don't know where to focus your 1on1 efforts.

I understand that displaying the six-point answers as-is for each individual may be more detailed than is practically necessary. As with the breakdown of satisfaction levels on the current engagement page, it would be sufficient to group responses into three categories: positive (slightly applicable / applicable / very applicable), negative (not applicable at all / not applicable / slightly not applicable), and neutral. What matters is grasping which direction each member's sentiment is moving, not the exact point on the scale.

Regarding privacy: we fully understand that Attuned was designed around trust, and that anonymous, aggregated engagement results are at its core. Results are only meaningful when respondents can answer honestly with peace of mind, and we respect that design philosophy. That said, I think there is room for an opt-in at the company level. If the survey itself clearly tells respondents that responses are not anonymous and will only be used to improve the team's situation, and clearly promises that there will be no retaliation for negative responses and no rewards for positive ones, then it may be acceptable for companies to enable this level of visibility. What matters is that respondents understood and agreed to it beforehand.

EXPECTED FEATURES:
- In addition to the aggregated score, a way to check each respondent's response tendency (positive / neutral / negative) for each question, or even actual ratings if privacy settings allow
- An option to filter by individual motivators (e.g. safety, competitiveness, altruism) or display across all 11 motivators
- An account-level setting that companies can enable once it is clear to respondents that the survey is non-anonymous

With this feature, the engagement page would be far more actionable. Instead of guessing from aggregated numbers, you could grasp exactly what state each member is in and run 1on1s in a more targeted way.

Supplement: being able to compare these individual responses over time, across survey batches, would be even more powerful. I'm posting that as a separate request: "A feature that allows you to easily compare engagement scores for each question across the survey period."
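To make the proposed three-way grouping concrete, here is a minimal sketch in Python. It assumes ratings are stored as integers 1 to 6 in scale order, and it treats an unanswered question as neutral; both the storage format and the neutral rule are my assumptions, not Attuned's actual data model.

```python
from typing import Optional

def bucket_response(rating: Optional[int]) -> str:
    """Collapse a 6-point rating into positive / neutral / negative.

    Assumed encoding: 1 = "not applicable at all" ... 6 = "very applicable".
    """
    if rating is None:
        return "neutral"   # skipped or not yet answered (assumption)
    if rating >= 4:        # slightly applicable / applicable / very applicable
        return "positive"
    return "negative"      # not applicable at all / not applicable / slightly not applicable

# Example: three respondents on one question about room for experimentation.
print([bucket_response(r) for r in (6, 2, None)])  # ['positive', 'negative', 'neutral']
```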
Engagement Survey
A feature that allows you to easily compare engagement scores for each question across the survey period
As a manager who uses Attuned's engagement analysis to track team engagement, I want a feature that lets me easily compare how individual question scores change across survey batches. Currently, the only way to see how a specific question is evolving is to switch between survey batches manually using the dropdown, take your own notes, and compare the numbers. That already takes quite a bit of time with biweekly surveys, and on monthly or quarterly cycles, where multiple questions are aggregated into a single motivator score, it is virtually impossible to track individual questions over time.

This matters because individual question results can move in opposite directions even when a motivator's aggregate score looks flat. For example, a team's "safety" score may appear stable at first glance while questions about job stability trend down and questions about workplace safety trend up. Or "creativity" scores may stay flat while it remains hidden that responses to questions about room for experimentation have been declining for months. Without question-level trends, you miss these signals entirely and don't know where to focus team policies.

EXPECTED FEATURES:
- A view where per-question scores can be compared side by side across multiple survey batches (ideally 6 to 12 months or more)
- A simple trend line or table showing how each question's score changed over time
- The ability to see not only overall motivators (e.g. growth, feedback, autonomy) but also which specific questions within them are improving or declining

With this feature, the engagement page would be far more actionable. You could pinpoint which aspects of each motivator need attention and make conversations with your team more targeted.

Supplement: I'm posting the ability to check each team member's actual responses as a separate request: "A function that allows you to check the actual survey responses of each team member on a question-by-question basis." That one is about checking who answered and how; this one is about comparing question scores over time. They complement each other, but neither presupposes the other.
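To illustrate the kind of pivot this view needs, here is a minimal sketch in Python. It assumes each survey batch is available as a mapping from question ID to that batch's average score; the question IDs and data shape are hypothetical, not Attuned's actual API.

```python
def question_trends(batches: list[dict[str, float]]) -> dict[str, list[float]]:
    """Pivot batch-level scores into one chronological score series per question."""
    series: dict[str, list[float]] = {}
    for batch in batches:  # batches assumed to be in chronological order
        for question_id, score in batch.items():
            series.setdefault(question_id, []).append(score)
    return series

# Example: three monthly batches for two hypothetical "safety" questions.
batches = [
    {"safety_q1": 4.2, "safety_q2": 4.1},
    {"safety_q1": 4.3, "safety_q2": 3.8},
    {"safety_q1": 4.4, "safety_q2": 3.5},
]
# safety_q1 trends up while safety_q2 declines, even though the aggregated
# "safety" motivator score would look roughly flat across all three batches.
print(question_trends(batches))
# {'safety_q1': [4.2, 4.3, 4.4], 'safety_q2': [4.1, 3.8, 3.5]}
```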
Engagement Survey