Meta has said it is testing facial-recognition technology to catch scam ads that use fake or stolen images of celebrities, including AI-generated ones. These ads often lure people into bogus investment schemes, so a tool like this could help filter them out before they reach users. At the same time, the approach raises privacy concerns: people want protection from scams, but they also don’t want platforms scanning every face that appears in their feeds. Through this claim, I want to clarify what Meta is actually doing versus what people assume it is doing, and to examine the ethical question of how much surveillance is appropriate in the name of protecting users.
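
To make the mechanism concrete, below is a minimal sketch, in Python, of the general kind of check such a system could perform: comparing a face embedding extracted from an ad image against reference embeddings of enrolled public figures using cosine similarity. This is not Meta’s actual pipeline; the embedding model is assumed to exist upstream, and the names, vectors, and threshold here are hypothetical placeholders.

```python
import numpy as np

# Hypothetical sketch only -- not Meta's real system. Assumes some upstream
# face-embedding model has already turned each face image into a vector.

SIMILARITY_THRESHOLD = 0.85  # made-up cutoff for "looks like this person"


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def flag_possible_celeb_bait(ad_embedding: np.ndarray,
                             celebrity_refs: dict[str, np.ndarray]) -> list[str]:
    """Return the enrolled public figures whose reference embedding the ad face resembles."""
    matches = []
    for name, ref in celebrity_refs.items():
        if cosine_similarity(ad_embedding, ref) >= SIMILARITY_THRESHOLD:
            matches.append(name)
    return matches


if __name__ == "__main__":
    # Toy vectors standing in for real embeddings.
    refs = {
        "public_figure_a": np.array([0.9, 0.1, 0.4]),
        "public_figure_b": np.array([0.1, 0.8, 0.5]),
    }
    ad_face = np.array([0.88, 0.12, 0.42])  # pretend this came from an ad image
    print(flag_possible_celeb_bait(ad_face, refs))  # likely ['public_figure_a']
```

Even this toy version shows where the privacy tension comes from: to run the comparison at all, the platform has to extract and process face data from the images it reviews.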