Hong Kong’s privacy watchdog has opened a criminal investigation into a scandal involving AI-generated pornographic images. A student at Hong Kong’s top university allegedly used artificial intelligence tools to create indecent images of women without their permission.
Hong Kong’s Privacy Body Steps In Amid AI Photo Scandal
The Office of the Privacy Commissioner for Personal Data (PCPD) issued a statement confirming the launch of the probe. It said the incident is now under criminal investigation and that it would not comment further at this stage for legal reasons.
The announcement follows disturbing reports that a male student at the university created explicit fake images, known as “deepfakes,” of 20 to 30 women. The women targeted were reportedly his classmates and teachers. The student allegedly took ordinary photos from their social media accounts and uploaded them to free AI tools that generate fake nude or sexual images.
The case has caused shock and anger in the city, especially among students and teachers, and has prompted wider questions about how easily personal photos shared online can be misused with artificial intelligence.
AI Deepfakes Made Without Consent Spark Legal Alarm
The fake images were created with AI tools that are freely available online. These tools can transform ordinary photos into realistic-looking pornographic pictures. In this case, the women involved are believed to have had no knowledge that their images were being used in this way.
Under Hong Kong’s privacy laws, it is illegal to disclose a person’s personal data, including photos, without consent. If the disclosure causes harm to the person or their family, or is made with reckless disregard for that harm, it may constitute a criminal offence.
Hong Kong’s Chief Executive also commented on the matter, saying it should be handed over to the police or other law enforcement agencies.
University Faces Criticism Over Internal Handling
The university issued a warning letter to the student and asked him to apologize formally to the women whose images were used. Many, however, are questioning whether this response was strong enough.
The university did not refer the case to its Disciplinary Committee. According to reports, university officials told three of the victims that the student likely had not broken any rules the committee could act on. That decision has drawn criticism from students, teachers, and the general public.
Many are now asking how universities should deal with AI-related harassment and whether their current rules are enough to protect victims. The fact that this scandal involved free tools available to anyone has raised serious concerns about technology misuse and student safety.
The criminal investigation is ongoing. The Privacy Commissioner and law enforcement plan to examine how the student obtained, misused, and shared the personal data.
This is one of the first major AI deepfake scandals in Hong Kong, and authorities are treating it as a criminal matter. The case has shocked the public and sparked a debate about privacy and technology, with particular concern in schools and universities.