Deepfakes not in our interest; company complies with all local laws: YouTube


Deepfake videos are not in the interest of YouTube as none of the stakeholders want to associate themselves with platforms that allow fake news or misinformation, a senior company official has said.

YouTube India Director Ishan John Chatterjee said on Thursday that the firm complies with all local laws and continues to engage with the government on all emerging issues.

“I want to reiterate that misinformation, in general, and deepfakes in AI is actually not in our interest at all. As a platform, if you look at the different stakeholders that we serve, and let us take the three broad ones i.e. users or viewers, creators and advertisers, none of them want to be associated with a platform that allows fake news (or) misinformation,” Chatterjee said.

YouTube’s intentions are closely aligned with those of the government and the key stakeholders it has to address, he added.

“We comply with all local laws and continue to engage with the government and industry stakeholders on emerging issues.” He was responding to a query about Union Minister Rajeev Chandrasekhar’s comment that social media platforms have not aligned their terms of use with the new IT rules notified in October 2022.

Union Cabinet Minister for Electronics and IT Ashwini Vaishnaw and Minister of State for Electronics and IT Rajeev Chandrasekhar have directed social media platforms to strictly act against deepfakes.

Vaishnaw had said that the government would come out with new guidelines to fight deepfakes.

Chandrasekhar had asked social media firms to update their user policies in line with the IT rules notified in October 2022.


The government swung into action after several celebrities reported that deepfake images and videos of them were in circulation.

Prime Minister Narendra Modi has also flagged issues around deepfakes.

YouTube’s Director and Global Head of Responsibility, Timothy Katz, said the company removes content that does not comply with its policies and tries to recommend videos from credible sources in cases where the content affects somebody’s finances, elections or other sensitive matters.

The company, in its compliance report for the second quarter of 2023, said it had removed over 78,000 videos globally for violating its misinformation policies, including videos that violated its manipulated content policy.

The video platform also removed 9,63,000 videos for violating its spam, deceptive practices and scams policy.

Katz said YouTube has developed tools to recommend content from authoritative and credible sources when it comes to news and other information related to sensitive content.

“Our systems are trained to elevate authoritative sources higher in search results. Whether you are searching for something evergreen or a current event, we aim to surface videos from sources like public health authorities, research institutions, and news outlets within at least the top 10 search results,” he said.

YouTube uses classifiers to identify whether a video is “authoritative”, and these classifications rely on human evaluators, who assess the quality of information in each channel or video, Katz noted.

“To determine authoritativeness, evaluators answer a few key questions, which our systems then extrapolate at scale. Their answers and more determine how authoritative a video is. The higher the score, the more the video is promoted when it comes to news and information content,” he said.
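The scoring-and-promotion behaviour described above can be read as a weighted ranking step. The toy sketch below is purely illustrative and is not YouTube’s implementation; the class, function, field names, weights and scores are all assumptions invented for the example.

```python
# Purely illustrative toy sketch (NOT YouTube's actual system): rank candidate
# videos for a news-style query by blending a query-relevance score with an
# "authoritativeness" score standing in for aggregated evaluator ratings.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    relevance: float          # hypothetical query-relevance score, 0..1
    authoritativeness: float  # hypothetical evaluator-derived score, 0..1

def rank_for_news_query(videos, authority_weight=0.7):
    """Blend relevance and authoritativeness; a higher weight favours
    authoritative sources for news/sensitive queries (assumed behaviour)."""
    def score(v: Video) -> float:
        return (1 - authority_weight) * v.relevance + authority_weight * v.authoritativeness
    return sorted(videos, key=score, reverse=True)

if __name__ == "__main__":
    candidates = [
        Video("Viral clip from unverified source", relevance=0.9, authoritativeness=0.2),
        Video("Public health authority briefing", relevance=0.7, authoritativeness=0.95),
        Video("News outlet report", relevance=0.8, authoritativeness=0.85),
    ]
    for v in rank_for_news_query(candidates):
        print(v.title)
```

In this sketch the unverified clip ranks last despite being the most relevant, mirroring the stated aim of surfacing public health authorities, research institutions and news outlets for news and information queries.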

To tighten norms against deepfakes, Google on Wednesday said that content creators on YouTube will need to disclose any altered or synthetic content that they post on the platform.

“We require content creators to disclose that it is synthetic media. We want to make sure that this can be made available to our users. We want to label content that looks realistic and could potentially be deceiving to users.

“We want to make sure that we are labelling that content for users. And then the third is that we want to make sure that people have access to privacy so that we will enable proper removal of AI-generated or other synthetic or altered content for folks to be able to make sure they’re not being manipulated,” Katz said.

Google said that it will enable the removal of AI-generated or other synthetic or altered content on YouTube that simulates an identifiable individual, including their face or voice, through its privacy request process.


