When a colleague gives you 360 feedback in Moral Fabric, the bottom of the form has an optional section: a short list of qualities, each rated on a five-point scale. This article explains what's in the section, why it asks about frequency rather than score, and how workspaces shape their own list.

The section shows a handful of named qualities, each with a one-line description. For each, the feedbacker picks a frequency:
- Rarely
- Sometimes
- Often
- Usually
- Almost always
Or "Not sure": a real choice, not a hidden default. Or skip the section entirely.
The instruction at the top reads: "These are snap judgments — trust your gut. Skip any you're unsure about."
The whole section takes about thirty seconds.
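For readers who think in data shapes, here's a minimal sketch of what one answer in this section might look like. The type and field names are illustrative assumptions, not Moral Fabric's actual schema.

```ts
// Hypothetical shapes for the optional qualities section of a feedback form.
// Names are illustrative, not Moral Fabric's real data model.

type Frequency =
  | "rarely"
  | "sometimes"
  | "often"
  | "usually"
  | "almost_always";

interface Quality {
  id: string;
  name: string;        // e.g. "Clarity"
  description: string; // the one-line description shown on the form
}

// One respondent's answer for one quality. "Not sure" and skipping the
// section entirely are both real options, so a blank answer is allowed.
interface QualityRating {
  qualityId: string;
  rating: Frequency | "not_sure" | null; // null = left blank / section skipped
}
```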
Most rating systems ask "how good is this person at X?" That question slides into evaluation, and from evaluation into bias.
Frequency questions ask something different: "how often do you see this?" The answer is anchored in what the rater has actually witnessed, not in what they think the person deserves. It's harder to be unfair to someone when you have to point to behaviour.
Five points is enough resolution to matter and few enough that "Not sure" stays a real option rather than a polite middle.
In the feedback view, two things appear once ratings come in:
- A radar chart, one axis per quality, averaged across respondents.
- Average and respondent count per quality.
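As a rough sketch, the aggregation could look like the snippet below, including the two-rater minimum described in the next paragraph. The numeric mapping and function names are assumptions, not the real implementation.

```ts
// Illustrative aggregation of quality ratings across respondents.
// The 1–5 mapping and the names here are assumptions.

const FREQUENCY_VALUE: Record<string, number> = {
  rarely: 1,
  sometimes: 2,
  often: 3,
  usually: 4,
  almost_always: 5,
};

interface QualitySummary {
  qualityId: string;
  respondentCount: number;
  average: number | null; // null renders as "—" below the two-rater minimum
}

function summarise(
  qualityId: string,
  ratings: string[] // frequency answers; "not sure" and blanks already filtered out
): QualitySummary {
  const values = ratings
    .map((r) => FREQUENCY_VALUE[r])
    .filter((v) => v !== undefined);
  const respondentCount = values.length;

  // Current rule: show a number only once two or more colleagues have rated.
  if (respondentCount < 2) {
    return { qualityId, respondentCount, average: null };
  }

  const average = values.reduce((a, b) => a + b, 0) / respondentCount;
  return { qualityId, respondentCount, average };
}
```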
Today, a quality only shows a number once two or more colleagues have rated it; below that, the value displays as "—". The reasoning is statistical: a single snap judgment is too thin to read as a pattern. Note that this is not an anonymity rule. Moral Fabric's 360 isn't anonymous, and the recipient sees who wrote each open answer (see 360 feedback → Who sees what). We're moving toward showing single ratings with a "small sample" warning rather than hiding them; a snap judgment from one person is still useful, with the right framing.

The platform ships with seven SMA qualities. They're a working theory of what makes someone effective in a self-managing, mission-driven org:
- You own your work, your roles, and your decisions.
- You focus where it creates the most change.
- You widen the circle: who you work with, who you include.
- You seek what challenges you and change your mind on evidence.
- You're clear because you care. Honest, role-addressed, positive intent.
- You eagerly reach for tools, AI, and automation to multiply your impact.
- You sustain your ambition by knowing your rhythm and setting boundaries.
Each links to a longer write-up. Workspaces are not stuck with these — they're a starting point, not a doctrine.
Admins manage the list at . From there:
- Turn the section on or off. Off means the feedback form skips it entirely.
- Seed the SMA defaults, then edit from there.
- Add, edit, archive, or reorder qualities.
Archiving a quality removes it from new forms but preserves the historical ratings — old summaries don't break, and the radar chart for past rounds still renders.
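A sketch of how that lifecycle might be modelled is below. The types, fields, and function names are illustrative, not the actual data model.

```ts
// Sketch: archived qualities drop out of new forms but stay available
// to historical summaries. Names here are assumptions.

interface WorkspaceQuality {
  id: string;
  name: string;
  description: string;
  position: number;        // admin-controlled ordering
  archivedAt: Date | null; // null = active
}

// New feedback forms only offer active qualities, in admin order.
function qualitiesForNewForm(all: WorkspaceQuality[]): WorkspaceQuality[] {
  return all
    .filter((q) => q.archivedAt === null)
    .sort((a, b) => a.position - b.position);
}

// Past rounds keep rendering every quality they were actually rated on,
// archived or not, so old summaries and radar charts don't break.
function qualitiesForPastRound(
  all: WorkspaceQuality[],
  ratedQualityIds: Set<string>
): WorkspaceQuality[] {
  return all.filter((q) => ratedQualityIds.has(q.id));
}
```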
Quality ratings and role self-ratings are easy to confuse. They're different.

Quality ratings come from peers, inside a 360 round, at the bottom of a feedback form. They're a snapshot of how someone shows up.

Role self-ratings come from a person in a specific role: a 1–3 rating on Skill / Energy / Attention, visible to the role-holder (and, soon, their team lead). They're an honest self-signal of fit.
Both can be true at once. A peer might mark you "Almost always" on Clarity while you rate yourself "Learning" on Skill in your Bookkeeping role. They answer different questions.
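To make the contrast concrete, here are the two shapes side by side. Field names, and the guess that "Learning" maps to the low end of the 1–3 scale, are assumptions for illustration only.

```ts
// Two different ratings that are easy to confuse. Shapes are illustrative.

// A peer's frequency rating of a quality, given inside a 360 round.
interface PeerQualityRating {
  raterId: string;   // a colleague in the round
  qualityId: string; // e.g. Clarity
  frequency: "rarely" | "sometimes" | "often" | "usually" | "almost_always";
}

// A person's own 1–3 rating of Skill / Energy / Attention in a specific role.
interface RoleSelfRating {
  roleId: string;       // e.g. a Bookkeeping role
  skill: 1 | 2 | 3;     // assumption: 1 might display as "Learning"
  energy: 1 | 2 | 3;
  attention: 1 | 2 | 3;
}
```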
Qualities are optional. A round runs fine without them. They're worth turning on when:
- You want comparability across rounds. The radar shifts over time; the open answers don't compare cleanly.
- You want a lower-effort way in for reluctant feedbackers: someone who freezes at a blank text box can still rate seven qualities in a minute.
- You want a shared vocabulary. Named qualities give the team words to use outside feedback rounds too.
If none of those fit, turn the section off and let the open questions carry the round.
Quality ratings are part of a feedback response and inherit the same access rules — see Who can see your data for the full picture.