TweetGuard
Role: UX Designer, Project Lead
Tools + Methods: Figma, Secondary Research
Deliverables: Demo, Lo-Fi Prototype
Collaborators: Yasmin Alemaddine, Heidi He,
Siqi Ji, Jorge Bello
In 2020, I took a class called Tech, Media, Democracy, a collaboration among five New York City-based grad schools that culminated in an end-of-semester hackathon. The class centered on topics like Big Tech, misinformation, and journalism. TweetGuard is my team's hackathon project: a tool that addressed Twitter's role as a spreader of misinformation and helped Twitter users discover whether they had unknowingly encountered or spread it.
Full Case Study

The TweetGuard landing page explains our goals and intent in creating the tool and, in the spirit of transparency, describes how it works. This is also where users can enter their Twitter handle and sign in to their accounts, so that when their results appear they can take action directly from our page.
The results page has two primary sections: bot detection and misinformation within the content of the user's tweets. In the latter, any questionable or flagged information is highlighted to show the user exactly what was false, so they can begin to detect patterns in the falsehoods. If an account they follow is a known bot or known to be untrustworthy, they can unfollow, report, or block that account directly from TweetGuard. They can take similar actions on their own tweets.