Social media has become a primary source of information for many people. As we navigate through a complex landscape of opinions and perspectives, it’s increasingly common to turn to comment sections for insights from fellow users. However, what happens when these seemingly authentic voices are actually part of a coordinated effort to shape public opinion?
At Exorde Labs, we specialize in using advanced data analytics to uncover patterns and insights that might otherwise go unnoticed. In this blog post, we’ll explore a recent case study where our team identified and analyzed a suspected disinformation campaign targeting discussions about the conflict between Ukraine and Russia.
Our journey began with a broad analysis of conversations related to Ukraine and Russia across multiple languages, including English, Spanish, Portuguese, Russian, Ukrainian, French, Italian, Chinese, Japanese, and Korean. By casting a wide net, we aimed to gather a comprehensive cross-section of posts for our investigation.
On May 22nd, our keyword analysis revealed a significant spike in post activity related to the topic. This anomaly piqued our interest and prompted us to dive deeper into the nature of the conversation.
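Our detection pipeline is proprietary, but the core idea of volume-spike detection can be sketched with a simple z-score over daily post counts. All data, dates, and thresholds below are illustrative, not our actual figures:

```python
from statistics import mean, stdev

def find_spikes(daily_counts, threshold=2.0):
    """Flag days whose post volume deviates strongly from the series mean.

    daily_counts: list of (day, count) pairs. threshold is the z-score
    above which a day is treated as anomalous (illustrative value; a large
    outlier inflates the stdev, so short series need a modest threshold).
    """
    counts = [c for _, c in daily_counts]
    mu, sigma = mean(counts), stdev(counts)
    return [day for day, c in daily_counts
            if sigma > 0 and (c - mu) / sigma > threshold]

# Illustrative daily post volumes: May 22nd stands out sharply.
series = [("May 18", 1020), ("May 19", 980), ("May 20", 1050),
          ("May 21", 1010), ("May 22", 4800), ("May 23", 1100)]
print(find_spikes(series))  # ['May 22']
```

In practice a rolling baseline window works better than a global mean, but the principle is the same: flag the day whose volume a stationary model cannot explain.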
Using our proprietary AI technology, we analyzed the emotional sentiment of posts on May 22nd across 26 different emotions. Interestingly, we observed a notable increase in emotions associated with anger and annoyance — a pattern often linked to automated accounts employing shock and trigger tactics to sway public opinion.
However, we didn’t detect any significant change in emotions related to sadness, suggesting that the conversation spike wasn’t tied to a major new development in the conflict.
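The emotion model itself is proprietary, but the comparison step above is straightforward: compute each emotion's share of posts on the anomalous day and diff it against a baseline day. The labels and counts below are illustrative placeholders, chosen only to mirror the pattern described (anger rises, sadness stays flat):

```python
from collections import Counter

def emotion_shares(labels):
    """Return each emotion's share of a day's posts.

    labels: one predicted emotion label per post (illustrative data;
    a real pipeline would use per-post scores across all 26 emotions).
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {emotion: n / total for emotion, n in counts.items()}

baseline = ["neutral"] * 70 + ["anger"] * 10 + ["sadness"] * 20
may_22nd = ["neutral"] * 50 + ["anger"] * 30 + ["sadness"] * 20

base, spike = emotion_shares(baseline), emotion_shares(may_22nd)
# Anger's share jumps while sadness is unchanged -- the signature of
# provocation without an underlying new development.
print(round(spike["anger"] - base["anger"], 2))  # 0.2
print(spike["sadness"] - base["sadness"])        # 0.0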
To identify the root cause of the conversation spike, we filtered the sentiment data by channel across different keyword groups. This analysis revealed that posts originating from YouTube experienced a significant sentiment shift on May 22nd.
By focusing on YouTube posts from the target date, we were able to identify the specific video that triggered the sentiment change — a report from a Spanish news publication covering the conflict.
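The channel breakdown amounts to a group-by over post metadata. A minimal sketch follows; the field names, score range, and values are assumptions for illustration, not our actual schema:

```python
from collections import defaultdict

def mean_sentiment_by_channel(posts):
    """Average sentiment score per channel.

    posts: dicts with 'channel' and 'sentiment' keys (hypothetical schema;
    sentiment is assumed to be a score in [-1, 1]).
    """
    buckets = defaultdict(list)
    for post in posts:
        buckets[post["channel"]].append(post["sentiment"])
    return {ch: sum(vals) / len(vals) for ch, vals in buckets.items()}

# Illustrative posts from the target date: YouTube skews sharply negative.
posts = [
    {"channel": "youtube", "sentiment": -0.8},
    {"channel": "youtube", "sentiment": -0.6},
    {"channel": "twitter", "sentiment": 0.1},
    {"channel": "twitter", "sentiment": -0.1},
]
print(mean_sentiment_by_channel(posts))
```

Slicing the same aggregation by keyword group as well as channel is what isolates where a sentiment shift actually originates.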
Upon closer examination of the video’s comment section, we discovered a large number of comments that appeared to be AI-generated. Using an AI language model, we determined that 37 of the video’s 307 comments (12.05%) were likely created by bots.
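The bot-share figure is a simple ratio over per-comment classifications. In the sketch below, the boolean flags stand in for the language model's per-comment judgments; only the counts come from the analysis above:

```python
def bot_share(flags):
    """Percentage of comments flagged as likely AI-generated.

    flags: one boolean per comment (placeholder for the model's output).
    """
    return round(100 * sum(flags) / len(flags), 2)

# 37 of 307 comments flagged, as in the video analyzed above.
flags = [True] * 37 + [False] * 270
print(bot_share(flags))  # 12.05
```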
The AI model flagged comments based on several criteria, including: