Reddit to update web standard to block automated website scraping

ChannelNewsAsia

Social media platform Reddit said on Tuesday it will update a web standard used by the platform to block automated data scraping from its website, following reports that AI startups were bypassing the rule to gather content for their systems.

FILE PHOTO: Reddit's logo is displayed at the New York Stock Exchange in New York City, U.S., March 21, 2024. REUTERS/Brendan McDermid/File Photo

Reddit said it would update the Robots Exclusion Protocol, or "robots.txt," a widely accepted standard that determines which parts of a site crawlers are allowed to access. More recently, robots.txt has become a key tool that publishers use to prevent tech companies from taking their content free of charge to train AI algorithms and to generate summaries in response to some search queries.
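For illustration, a robots.txt file sits at a site's root and lists rules that compliant crawlers are expected to honor. A minimal sketch is below; the crawler name `ExampleBot` is a placeholder, not Reddit's actual configuration.

```
# Block one named crawler from the entire site
User-agent: ExampleBot
Disallow: /

# All other crawlers: allowed everywhere except /private/
User-agent: *
Disallow: /private/
```

Compliance with these rules is voluntary; the protocol has no enforcement mechanism, which is why crawlers can simply ignore it.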

This follows a Wired investigation which found that AI search startup Perplexity likely bypassed efforts to block its web crawler via robots.txt.
