Overview
Designing for Trust and Engagement in an AI-Powered Commenting Experience
I collaborated with product and engineering to redesign the commenting experience on MSN, aiming to boost participation and build trust around AI-generated content (AIGC). Our goal was to make AIGC feel more reliable and engaging by rethinking how information, polls, and discussions were surfaced: clearer signals, more visible activity, and lower-friction entry points. This included new features like contextual AI-generated poll questions, AI-generated summaries, and quick comments.
The challenge
Users are reading news, but skipping the conversation
The challenge was to create a commenting experience that felt natural and worth joining. Instead of static reading, I wanted to encourage dynamic discussion. That meant rethinking information hierarchy, surfacing polls and discussions more intuitively, and reducing the friction to comment through features like quick comments, pop-up cards, and better visual cues.



Solution: Canvas Refinements
Before
After
Establishing AI Credibility Through Citation Placement
One detail kept coming up during our design conversations: if users can't easily see when AI is involved, trust breaks down. So I asked: how do we make authorship obvious without disrupting the experience?
To answer this, I looked at patterns people already trust: academic citations. I proposed placing the AI label in the top-right corner, echoing APA and MLA formats, which consistently display source information in the top right. That familiarity makes it easy for users to recognize authorship and build confidence in the content.
APA Journal Article Format
Journal Title + Year, Volume, Issue, Page Range
Top right corner
MLA Format
Author's Last Name + Page Number Shown
Top right corner
My Rationale
Placing the label in the top-right corner reinforced transparency and credibility without adding friction.
However, after discussion with the team, we decided on a left-aligned label, citing reading patterns and F-shaped scanning behavior (users tend to read from left to right). This sparked a valuable debate about whether to prioritize visual familiarity (top-right = source) or behavioral readability (left-first scanning).
Why It Mattered
Ultimately placing the label on the left won out for scannability, and the discussion sharpened our shared understanding of information hierarchy and trust signaling.
This led to further design iterations and user testing, including testing variations in placement and label phrasing, to ensure users could clearly and confidently identify AI-generated content without interrupting their reading flow.
Research and insights
The core research question:
Does the placement and copy of the labeling affect users’ understanding of what is AI-generated content (AIGC)?
Findings from this study informed key design decisions around transparency, trust, and responsible AI integration.
UX Labs 1: Identifying AI in the Experience
Scenario:
Imagine that you have answered a poll question about the U.S. stock market.
Questions:
Parts of this experience are powered by AI. Take a look at the image and list any parts you think are AI-powered.
Why?
🔍 Key Insights from 36 Participants
31% believed both the poll question and the background information were AI-driven, suggesting a general openness to AI involvement across different UI elements.
Only 14% thought the poll question itself was generated by AI, reflecting a perception that polls feel more user-initiated.
Why It Matters
Users were more likely to trust and engage with the content once they understood what was created by AI. Clarifying attribution helped reduce skepticism, reinforce transparency, and create a foundation of trust that’s crucial in news contexts involving AI summarization.
UX Labs 2: Testing Label Clarity ('Insights from AI' vs. 'Powered by AI')
To understand how users interpret AI attribution labels, I ran a targeted A/B test comparing two framing approaches; initial feedback on our metadata labels had revealed friction around how AI involvement was described. We tested two design variations:
Concept A: “Insights from AI” (text label)
Concept B: “Powered by AI” with a sparkle icon
Scenario:
Imagine you've answered a poll relating to your thoughts on the U.S. stock market. AI generates both the poll and the background information.
Question:
Which of these images best shows that?
Concept A: "Insights from AI" (top right)
Concept B: "Powered by AI" with sparkle icon (top right)
🔍 Key Insights from 43 Participants (p=0.033)
While Concept B ("Powered by AI") was slightly preferred, the margin was modest (p=0.033) and we treated the result as directional rather than conclusive. However, feedback from participants offered valuable insight into how language and visual framing shape trust and understanding.
While both labels communicated AI involvement, participants saw “Powered by AI” (with a sparkle icon) as clearer, more system-driven, and trustworthy. “Insights from AI” felt editorial and vague to many, which created confusion about what the AI actually contributed.
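For context on how a split like this reads statistically, here is a minimal sketch of an exact two-sided binomial test against chance preference, the kind of test typically used for a two-option forced choice. The study's raw counts aren't documented here, so the 29-of-43 split below is hypothetical; it simply lands near the reported p=0.033.

```typescript
// Exact two-sided binomial test of a two-option preference against chance (p0 = 0.5).
// Sums the probability of every outcome at least as unlikely as the observed one.
function binomialTwoSidedP(successes: number, n: number): number {
  // Precompute log-factorials for numerical stability.
  const logFact: number[] = [0];
  for (let i = 1; i <= n; i++) logFact.push(logFact[i - 1] + Math.log(i));
  const pmf = (k: number): number =>
    Math.exp(logFact[n] - logFact[k] - logFact[n - k] + n * Math.log(0.5));

  const observed = pmf(successes);
  let p = 0;
  for (let k = 0; k <= n; k++) {
    if (pmf(k) <= observed * (1 + 1e-9)) p += pmf(k);
  }
  return Math.min(1, p);
}

// Hypothetical: if 29 of 43 participants preferred Concept B, the two-sided
// p-value comes out around the reported figure.
console.log(binomialTwoSidedP(29, 43)); // ≈ 0.033
```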
Design Decision
I adopted “Powered by AI” as the preferred label and paired it with a recognizable sparkle icon. I also improved visual hierarchy to make the labeling more scannable and cohesive with the rest of the UI.
Even without a conclusive winner, this test helped surface users’ mental models of how AI works in the experience, and it informed our next iteration on AI transparency and labeling.
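To make the final treatment concrete, here is a minimal sketch of the attribution label as a component. The component name, styling, and the sparkle glyph used here are stand-ins; only the "Powered by AI" copy and the left-aligned placement come from the decisions described above.

```tsx
import React from "react";

// Hypothetical attribution label: sparkle icon plus explicit "Powered by AI"
// text, left-aligned per the earlier placement decision for scannability.
export function AIAttributionLabel(): JSX.Element {
  return (
    <span
      aria-label="This content was generated with AI"
      style={{ display: "inline-flex", alignItems: "center", gap: 4, fontSize: 12 }}
    >
      <span aria-hidden="true">✦</span>
      <span>Powered by AI</span>
    </span>
  );
}
```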
Why It Matters
Stronger and more explicit AI attribution improved users' mental models of how the system worked, reinforcing transparency. This helped users feel more confident in what they were reading—and made AI involvement feel intentional rather than hidden.
UX Labs 3: Clarifying Poll Metadata (Text, Icons, or Both?)
After refining how we labeled AI-generated content, the next challenge was ensuring users could clearly interpret poll-related interaction metadata, such as how many people voted or commented and when the poll was posted.
While the trust label helped clarify AI's involvement, users still struggled with understanding basic engagement signals, especially when labels were missing or icons were unclear. To address this, I conducted a third UX lab to test different treatments for poll metadata legibility and scannability.
Scenario:
Imagine you've answered a poll question about the U.S. stock market and you want to know how many people voted and commented in total, and when the poll was posted (shown under the poll question).
Question:
Which treatment do you think best shows this information?

🔍 Key Insights from 150 Participants (p=0.003)
With 150 participants, Concept A emerged as the statistically significant winner (p=0.003), outperforming both Concept B and Concept C.
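For reference, here is a sketch of how a three-way preference like this could be evaluated, assuming a chi-square goodness-of-fit test against uniform preference; the study's actual test and vote split aren't documented here, so the counts below are hypothetical.

```typescript
// Chi-square goodness-of-fit against uniform preference across three concepts.
// With df = 2, the chi-square survival function is exactly exp(-x / 2).
function chiSquarePValueDf2(observed: [number, number, number]): number {
  const total = observed[0] + observed[1] + observed[2];
  const expected = total / 3; // equal preference under the null hypothesis
  const chi2 = observed.reduce((sum, o) => sum + (o - expected) ** 2 / expected, 0);
  return Math.exp(-chi2 / 2);
}

// Hypothetical split of the 150 votes: a Concept A lead of this size lands
// near the reported p = 0.003.
console.log(chiSquarePValueDf2([69, 46, 35])); // ≈ 0.002
```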
Easy Scan and Understand
Users strongly preferred Concept A (icon + text), saying it made poll activity (votes, comments, time) more scannable and understandable at a glance.
Icons Alone Create Ambiguity
Concept B (icon only) led to confusion. Users didn’t always recognize what the symbols meant without accompanying labels.
Text-Only Requires Extra Effort
Concept C (units only) was readable but felt dated and harder to parse quickly. Users missed the visual cues that help with scanning and recognition.
Words Like "Votes" and "Comments" Are Necessary
Microcopy mattered: participants explicitly mentioned needing unit labels (e.g. "votes") to grasp the meaning behind the numbers. Their absence made the experience feel incomplete.
Design Outcomes
Final Direction: Adopted Concept A, combining icons with unit labels for all metadata (votes, comments, timestamp); a minimal sketch follows this list.
Microcopy reinforced for clarity: retained explicit words like "votes" and "comments" instead of relying on interpretation.
Improved UI legibility and hierarchy: ensured that interaction metadata is both easy to spot and semantically clear, especially for quick-glance use cases.
Standardized treatment across surfaces: this pattern became the baseline for poll modules and future community interaction components.
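To show the Concept A rule concretely, here is a minimal sketch of a metadata formatter in which every metric keeps an icon plus an explicit unit label. The function names, the emoji stand-ins for the product icons, and the formatting thresholds are all assumptions.

```typescript
// Poll interaction metadata, per the standardized Concept A treatment.
interface PollMeta {
  votes: number;
  comments: number;
  postedAt: Date;
}

// Compact counts (e.g. 1200 -> "1.2K") while keeping the unit word visible.
function formatCount(n: number): string {
  return n >= 1000 ? `${(n / 1000).toFixed(1)}K` : String(n);
}

// Icon + number + explicit unit label for each metric, so the row stays
// scannable and unambiguous at a glance.
function formatPollMeta(meta: PollMeta, now: Date = new Date()): string {
  const hours = Math.max(1, Math.round((now.getTime() - meta.postedAt.getTime()) / 3_600_000));
  const age = hours < 24 ? `${hours}h ago` : `${Math.round(hours / 24)}d ago`;
  return [
    `🗳 ${formatCount(meta.votes)} votes`,
    `💬 ${formatCount(meta.comments)} comments`,
    `🕐 ${age}`,
  ].join(" · ");
}

// Example output: "🗳 1.2K votes · 💬 348 comments · 🕐 5h ago"
```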
AI Suggested Comments
To complement the AI-generated summaries and polls, I explored how AI could also support users in participating more easily. One promising direction was using poll answers to generate suggested comments. These quick prompts lower the barrier to entry for engagement, especially for users who didn't know what to say but wanted to contribute.
Before

After

Although this design lowered the barrier to entry for commenting, I raised concerns that letting users post AI-suggested comments without adding their own input could lead to low-effort responses and bot-like behavior. To address this, the PM and I aligned on a solution: use full-sentence suggestions to kickstart empty threads, and shift to unfinished prompts in active ones to encourage more thoughtful, user-generated input.
Encouraging conversation with AI Suggestions when there are no comments

Thread State → AI Suggestion Type
Empty Thread → Full-sentence suggestions to kickstart the conversation
5+ Comments → Unfinished phrases to encourage user participation
Goal: Balance ease of entry with authenticity
Impact: Increased trust, reduced bot-like behavior, improved comment quality
Early in the design process, we explored giving users full, AI-generated comments to post with a single click. While this seemed like a fast way to boost engagement, I flagged concerns about authenticity, trust, and the potential for bot-like behavior dominating conversations. This frictionless approach risked turning discussions into a feed of AI-authored text, undermining our goal of building genuine dialogue.
To solve this, I proposed a nuanced alternative: dependent clauses generated from poll answers. Instead of complete sentences, we offered open-ended prompts that users had to finish themselves. This small design choice introduced just enough friction to make responses personal, without overwhelming users.
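A minimal sketch of that rule, using the thread states from the table above. The suggestion copy is illustrative, and the behavior for threads with one to four comments (not covered by the table) is an assumption here.

```typescript
type SuggestionType = "full-sentence" | "unfinished-phrase";

// Empty threads get complete starter sentences to kickstart conversation;
// active threads get open-ended stems the user must finish themselves.
// Threads with 1-4 comments aren't specified; this sketch assumes they
// behave like active threads.
function suggestionTypeFor(commentCount: number): SuggestionType {
  return commentCount === 0 ? "full-sentence" : "unfinished-phrase";
}

// Builds a suggested comment from the user's poll answer.
function buildSuggestion(pollAnswer: string, commentCount: number): string {
  return suggestionTypeFor(commentCount) === "full-sentence"
    ? `I voted "${pollAnswer}" because it reflects what I'm seeing right now.`
    : `I voted "${pollAnswer}" because `; // dependent clause for the user to finish
}
```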
Design Impact
The impact was clear: threads maintained a human tone, and early usability feedback showed users felt "guided, but still in control." This approach also aligned with our trust and quality metrics by reducing automated-looking comments and encouraging thoughtful participation.
Projected Engagement Lift: +12% in comment activity during early A/B tests.
Quality Signals: 28% more comments with original phrasing versus full-sentence AI suggestions.
User Feedback: “Helpful without feeling canned,” “Makes me think before posting.”