Google has taken action on over 1 lakh harmful posts

During May and June this year, Google removed more than 1 lakh harmful posts in compliance with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the IT Rules) and other relevant legislation.

In its transparency report, the company stated that it removed 634,357 posts in May 2021 and 526,866 posts in June 2021.

Google has begun publishing a transparency report and will do so every month going forward.

According to the company, these figures cover content removed in response to user complaints from India as well as content taken down through automated detection processes across Google's platforms.

Under the IT Rules, platforms of Google's scale are classified as Significant Social Media Intermediaries (SSMIs) and must report the removal actions they take, including those resulting from automated detection methods used across their services.

The vast majority of these removals were for copyright infringement: 95.6 per cent of the items in the breakdown below were removed over copyright violations, according to the report.

Copyright: 34,685 (95.6 per cent)

Other Legal: 620 (1.7 per cent)

Trademark: 423 (1.2 per cent)

Defamation: 382 (1.1 per cent)

Counterfeit: 89 (0.2 per cent)

Circumvention: 29 (0.1 per cent)

Graphic Sexual Content: 19 (0.1 per cent)

Impersonation: 12 (0.0 per cent)

Court Order: 6 (0.0 per cent)

According to Google's report, 83,613 posts were removed as a consequence of user complaints, while 526,866 posts were removed as a result of automated detection.

How Google's removal process works

Complaints submitted to Google

The company receives complaints for a variety of reasons. Each “item” in a given complaint corresponds to a specific URL identified in that complaint.

A single complaint may specify multiple items, which may or may not relate to the same piece of content.

When Google receives individual user complaints about allegedly unlawful or harmful content, it evaluates each item to determine whether it violates Google's Community Guidelines or content policies, or whether it must be removed under local legal requirements.

Posts identified automatically

In addition to acting on user reports, the company says it has been investing in combating harmful content online and uses technology to detect and remove it from its platforms.

For some of Google's products, these automated detection processes help prevent the spread of harmful material such as child sexual abuse material and violent extremist content.

Depending on the review, Google removes content that violates its Community Guidelines and content policies, restricts content (for example, age-restricting material that may not be appropriate for all audiences), or leaves content online when it does not break its guidelines or policies.

Automated detection enables Google to act more quickly and accurately in enforcing its guidelines and content policies.

These removal actions may result in the material being taken down or in a bad actor's access to the Google service being terminated.
