ChatGPT gets scary when it can guess location from photos

ChatGPT’s advanced location prediction capabilities have sparked new privacy concerns online.

Recent reports from TechSpot indicate that sharing photos on social networks, while typically viewed as innocuous, carries growing risks because of ChatGPT's evolving abilities. OpenAI's latest models, particularly o3 and o4-mini, have demonstrated a remarkable capacity to accurately predict the locations where images were taken, raising serious concerns about privacy and the potential for doxxing, the malicious disclosure of someone's personal location information.

The online community is currently engaged in a trend of testing ChatGPT's geolocation skills by submitting various images, such as restaurant menus and selfies, and challenging the AI to identify their locations, akin to a game of GeoGuessr. The results have been strikingly accurate. While models like GPT-4o, which lack o3's advanced "image inference" capabilities, also perform well, o3 has outperformed them in specific scenarios. For instance, it correctly identified a particular bar in Williamsburg from an image of a distinctive purple rhino head, while GPT-4o misidentified the location as a pub in England.

This newfound proficiency in location identification carries significant risks. By analyzing minor background details in selfies, the corner of a room in a social media story, or a glimpse of a window that might go unnoticed by the human eye, ChatGPT can infer where a photo was taken. This enables doxxing: malicious individuals could deliberately uncover and disclose private details, such as a person's home address or workplace, threatening their safety and privacy.

OpenAI recognizes the implications of this capability, yet emphasizes the potential positive uses of the technology, such as aiding individuals with disabilities, supporting scientific research, or assisting in emergency situations. The company insists that its models are designed to reject requests for personal or sensitive information and include safeguards to help protect individuals' identities in photographs.

Despite these precautions, users are advised to exercise caution when sharing images publicly. The distinction between a typical photo and an accidental revelation of personal location has become increasingly tenuous in the age of advanced AI technology.
