Investigate that Tech: LinkedIn
What happens when you have three times your usual homework for the week, a concert on the weekend, and a New Year’s resolution you realize you need to keep up? You dig up something interesting you’ve already written! Below is a short paper I had to write (and finish by 11:59 PM today) about the growing presence of AI on LinkedIn. I hope you enjoy it and that it makes you think about how AI is making its way into every part of our lives:
The Paper
I like to call LinkedIn “Business Facebook”: a platform where memories shared with family and friends are replaced by professionals sharing career highlights and making connections within their field of expertise. As a math major interested in AI in both scientific research and the broader technology sector, I have found LinkedIn invaluable for exposure to research and job opportunities, and for meeting like-minded people eager to discuss how furthering mathematics can influence global change through AI. However, the platform’s increasing integration of AI raises questions about its impact on user experience, content quality, and equitable networking opportunities. The opaque and volatile use of AI on LinkedIn, by both the platform and its users, undermines equitable professional networking by privileging pro-AI narratives, amplifying shallow content, masking systemic biases under the guise of algorithmic neutrality, and exploiting information imbalances to further leverage AI.
LinkedIn claims its AI algorithms enhance user experience and promote equity in professional networking. The company even publishes open articles on its engineering blog about how it improves these algorithms, quoting statistics like a “5.44% increase in invitations sent to [Infrequent Members (IMs)] and a 4.8% increase in connections made by IMs, while remaining neutral on the respective metrics for [Frequent Members (FMs)]” (Yin). However, while LinkedIn claims to improve equity metrics, users have limited visibility into how its algorithms actually function. The company’s closed-source approach creates an information asymmetry in which users must take such claims at face value. Arvind Narayanan and Sayash Kapoor argue that predictive AI, such as the algorithms LinkedIn uses to recommend connections or screen job candidates, often fails to work as advertised (as in this case, where LinkedIn argues its algorithms can create less homogeneous networks), especially when it makes consequential decisions about people’s lives and careers. They highlight that these systems are typically opaque, rarely subject to independent validation, and can perpetuate or even worsen existing inequalities, as seen in hiring and risk-assessment tools across industries. In LinkedIn’s case, the lack of transparency around how AI-driven recommendations are made leaves users vulnerable to hidden biases and arbitrary outcomes, a hallmark of what the authors term “AI snake oil.”
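To make those quoted numbers concrete, here is a minimal, purely hypothetical sketch of how a segment-level lift metric like the one LinkedIn reports might be computed from an internal A/B test. The event counts and field names below are invented; only LinkedIn can run this calculation on real logs, which is exactly the information asymmetry at issue.

```python
# Hypothetical sketch: how a lift metric like the quoted PYMK numbers
# might be computed from an internal A/B test. The event counts and
# segment names are invented for illustration only.

def percent_lift(treatment_total: float, control_total: float) -> float:
    """Relative change of the treatment group over the control group, in percent."""
    return (treatment_total - control_total) / control_total * 100

# Invented invitation counts, split by member-activity segment.
invites_sent = {
    "infrequent_members": {"control": 125_000, "treatment": 131_800},
    "frequent_members":   {"control": 980_000, "treatment": 981_500},
}

for segment, counts in invites_sent.items():
    lift = percent_lift(counts["treatment"], counts["control"])
    print(f"{segment}: {lift:+.2f}% invitations sent")

# Without access to the underlying logs, users have no way to verify
# a reported figure like the 5.44% lift for infrequent members.
```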
Even with the many claimed benefits of AI for its users, there are plenty of instances where AI on LinkedIn actively harms those same individuals. One notable example is users deploying AI to post more frequently and to mass-send connection messages, which Miguel Bernas calls a “malaise of social media.” On what was supposed to be an inherently social platform, people have stopped treating LinkedIn as a community and instead treat it as a competition to gain the most followers, even churning out automated, generic posts (Bernas). LinkedIn’s algorithms also appear to disproportionately promote users in the business and technology sectors, particularly those who highlight “AI” in their profiles and posts. In contrast, professionals from non-technical backgrounds, or those critical of current industry trends, often receive less visibility. This pattern suggests a misalignment between LinkedIn’s stated mission of serving all professionals and its actual platform dynamics. Whether this results from user demographics or algorithmic prioritization, the effect is the same: certain voices are systematically marginalized, undermining the platform’s claim of inclusivity (“LinkedIn: Log In or Sign Up”).
LinkedIn’s advertising capabilities raise additional concerns about equity and fairness, while also provoking questions about its power as a data broker. While LinkedIn states its ads cannot target “protected personal characteristics like gender, age, or actual or perceived race/ethnicity,” the platform permits targeting combinations of categories, such as companies worked at, job experience, and education, that can effectively serve as proxies for discrimination (“Targeting Options for LinkedIn Ads”). As a theoretical example, an advertiser could target a young audience through recent graduation years while also requiring extensive work experience, effectively hiding the opportunity from young people who were not fortunate enough to start gaining experience early because of other responsibilities. This technical loophole potentially allows employers to circumvent anti-discrimination protections while maintaining plausible deniability. Additionally, while businesses can buy LinkedIn data, users may find it difficult to delete or even locate what data LinkedIn holds on them; it is delivered as a heap of CSV files users must parse themselves, which makes it challenging for the average person to understand how much and what variety of data LinkedIn has collected.
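To illustrate the proxy problem, here is a small hypothetical sketch (the profile fields, values, and thresholds are all invented) showing how an audience built only from permitted targeting facets can still approximate an age filter:

```python
# Hypothetical sketch of proxy targeting: none of these filters name age,
# yet together they approximate it. Profile fields and values are invented.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    graduation_year: int   # education facet, allowed for targeting
    years_experience: int  # experience facet, allowed for targeting

profiles = [
    Profile("A", graduation_year=2023, years_experience=6),  # started working young
    Profile("B", graduation_year=2023, years_experience=1),  # same age, later start
    Profile("C", graduation_year=2005, years_experience=18),
]

# An ad "audience" built only from permitted facets...
audience = [
    p for p in profiles
    if p.graduation_year >= 2020     # recent graduates: a stand-in for "young"
    and p.years_experience >= 5      # heavy experience requirement
]

# ...ends up excluding young people who could not gain experience early (B),
# without ever filtering on a protected characteristic directly.
print([p.name for p in audience])  # ['A']
```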
All of these issues can be traced to the fact that LinkedIn operates on an information imbalance: as a data broker holding information on over 69 million companies and over a billion individuals (Osman), it is no surprise that LinkedIn makes money by providing that information to companies that lack it, mainly through companies buying LinkedIn user data or advertising on the platform (“Targeting Options for LinkedIn Ads”; “LinkDB - Buy an Exhaustive Database of LinkedIn Profiles”). The imbalance extends further, however: the little information users see about LinkedIn’s use of AI appears mostly in technical blog posts about how an algorithm was improved, while businesses see far more, especially through the AI-assisted Recruiter tools (“New AI-Assisted Search and Projects in Recruiter”), which promise to use generative AI to turn natural-language prompts into automatic filters for finding viable employees on LinkedIn. The problem with automated filters like these is that they, too, can be biased, particularly against candidates who do not conform to traditional career-progression norms (a resume scanner may misread an unconventional career path and underrate their ability to do a certain job) or who do not write their profiles to be algorithm-friendly rather than human-readable (for example, describing nuanced expertise instead of listing buzzwords).
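As a rough illustration of how such a filter can misfire, here is a hypothetical sketch of a naive keyword-based screen; the buzzword list and candidate descriptions are invented, and real systems are more sophisticated, but the failure mode is the same:

```python
# Hypothetical sketch of a naive keyword-based screening filter, the kind of
# automated step that can penalize candidates who describe nuanced expertise
# instead of buzzwords. Keywords and candidate text are invented.
REQUIRED_BUZZWORDS = {"machine learning", "ai", "big data", "cloud"}

def keyword_score(profile_text: str) -> int:
    """Count how many required buzzwords appear verbatim in the profile text."""
    text = profile_text.lower()
    return sum(1 for kw in REQUIRED_BUZZWORDS if kw in text)

candidates = {
    "buzzword-heavy": "AI and machine learning expert leveraging big data in the cloud",
    "nuanced": "Built statistical models for clinical-trial design and validated them "
               "on longitudinal patient outcomes",
}

for name, text in candidates.items():
    print(name, keyword_score(text))

# The nuanced candidate scores 0 and may be filtered out, even though their
# description implies exactly the skills the buzzwords stand for.
```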
To address these concerns, LinkedIn should prioritize transparency in its algorithmic systems. Users deserve clearer information about what data the platform collects and how it informs algorithmic decisions. The current process for accessing personal data is unnecessarily tedious, requiring users to parse multiple CSV files with arbitrary information cutoffs. LinkedIn could adopt algorithmic auditing practices similar to those mandated by the EU Digital Services Act and make the audit results publicly accessible. Additionally, implementing clear guidelines for AI usage on the platform and mandating labels for AI-generated content (similar to Meta’s “Imagined with AI” tags) would help users distinguish authentic human contributions from algorithmically generated material. If LinkedIn truly aims to promote equity in professional networking, it must move beyond superficial diversity metrics and address the structural issues that systematically advantage certain voices while marginalizing others.
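As a small illustration of how tedious the current export process is, here is a hedged sketch of the kind of script a user might write just to inventory what their data export contains (the directory path is illustrative, and the exact file names and columns in a real export vary):

```python
# Hypothetical sketch: inventorying a LinkedIn data-export folder. The export
# arrives as a collection of CSV files whose names and columns vary, so this
# only summarizes whatever it finds rather than assuming a fixed schema.
import csv
from pathlib import Path

def summarize_export(export_dir: str) -> None:
    """Print a one-line summary of every CSV file in a data-export folder."""
    for path in sorted(Path(export_dir).expanduser().glob("*.csv")):
        with path.open(newline="", encoding="utf-8-sig") as f:
            reader = csv.reader(f)
            header = next(reader, [])           # first row: column names
            row_count = sum(1 for _ in reader)  # remaining rows: the actual data
        print(f"{path.name}: {row_count} rows; columns: {', '.join(header)}")

# Illustrative usage (the folder name depends on when and how the export was requested):
# summarize_export("~/Downloads/linkedin-data-export")
```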
Bibliography:

Bernas, Miguel. “How AI Is Ruining LinkedIn.” LinkedIn, 2 Mar. 2025, https://www.LinkedIn.com/pulse/how-ai-ruining-LinkedIn-miguel-bernas-extgc/.

Cook, Jodie. “7 Signs Of AI-Generated Content On LinkedIn.” Forbes, https://www.forbes.com/sites/jodiecook/2025/03/20/7-signs-of-ai-generated-content-on-LinkedIn/. Accessed 17 May 2025.

Expert Panel®. “10 Ways Labeling AI Content Will Change Marketing On LinkedIn.” Forbes, https://www.forbes.com/councils/forbesagencycouncil/2024/08/01/10-ways-labeling-ai-content-will-change-marketing-on-LinkedIn/. Accessed 16 May 2025.

Forbes Post. LinkedIn, https://www.LinkedIn.com/posts/forbes-magazine_garbage-in-garbage-out-perplexity-spreads-activity-7212182947787886592-Eyx2/. Accessed 17 May 2025.

Krastev, Alexander. “Is Artificial Intelligence Killing LinkedIn?” LinkedIn, 25 Apr. 2025, https://www.LinkedIn.com/pulse/artificial-intelligence-killing-LinkedIn-alexander-krastev–fotff/.

“LinkDB - Buy an Exhaustive Database of LinkedIn Profiles.” https://nubela.co/proxycurl/linkdb. Accessed 18 May 2025.

“LinkedIn: Log In or Sign Up.” LinkedIn, https://www.LinkedIn.com/. Accessed 17 May 2025.

Narayanan, Arvind, and Sayash Kapoor. AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. Princeton University Press, 2024.

“New AI-Assisted Search and Projects in Recruiter.” LinkedIn, https://business.LinkedIn.com/talent-solutions/ai-assisted-search-and-projects. Accessed 16 May 2025.

Osman, Maddy. “Mind-Blowing LinkedIn Statistics and Facts.” Kinsta®, 8 Oct. 2018, https://kinsta.com/blog/LinkedIn-statistics/.

“Targeting Options for LinkedIn Ads.” Marketing Solutions Help, https://www.LinkedIn.com/help/lms/answer/a424655. Accessed 16 May 2025.

Yin, Qiannan. “Optimizing People You May Know (PYMK) for Equity in Network Creation.” LinkedIn, 5 Aug. 2021, https://www.LinkedIn.com/blog/engineering/member-customer-experience/optimizing-pymk-for-equity-in-network-creation.