SocArXiv releases AI policy

by Philip N. Cohen, SocArXiv director

Our new AI policy is here.

Our general moderation policy is here.

In November 2025, we paused new submissions on AI topics. For about three months, we turned away papers that tested AI models, proposed AI models, theorized about the future of AI, and so on. We accepted some empirical social science research about AI in society on a case-by-case basis. The purpose of this pause was to ease pressure on our moderators, encourage AI-oriented authors to find other ways of distributing their work, and give us time to craft a policy.

A subcommittee of our moderation team and steering committee researched and drafted this policy. Its members were Alex Hanna, Rebecca Kennison, Sam Koreman, Pamela Oliver, and myself (Philip Cohen). We then discussed the proposed policy with the full team and committee and made additional revisions.

We want a policy that will help us protect the epistemic commons from slop that dilutes our work, without spending too much time on each paper. We want to balance the needs and interests of scholars who rely on a more reliable research ecosystem (whether they post papers with us or not), moderators who face the queue of papers each day, and the many independent researchers who want to meet scholarly standards with their work but need guidance to do so. We are not trying to monopolize social science research dissemination, and we are happy to turn away work that would be more at home somewhere else (like aiXiv, which accepts work produced by large language models).

We feel the need to express our humanity in this process. We insist on making human judgments, drawing on our own perception and experience, and we will not defer to automated systems or enter into a technological arms race to defeat the (people who run the) machines. We strive for fairness, but make no promise of an algorithmically pure policy.

We hope this policy will help us meet our goals, which include disseminating valuable research better and faster, helping scholars understand what types of work are unacceptable, keeping out fraudulent research and reducing the volume of LLM-generated content, reducing the moderation burden, and encouraging honest disclosure of tools and methods.

Because of how quickly things are changing, we expect to revisit this policy periodically. We welcome your feedback, and will do our best to handle appeals.
