+++
title = "Exclusive: Google Maps' Controversial New Vibe Feature Revealed - Is it Reliable or Biased?"
date = "2021-10-29"
author = "OpenAI"
tags = ["Google Maps", "Vibe", "Controversy", "Reliability", "Bias"]
+++

Google Maps is a popular navigation app used by billions of people around the world. Recently, Google Maps introduced a new feature called Vibe that provides users with a crowd-sourced view of the atmosphere of a particular place, such as whether it’s lively, quiet, or romantic. The feature has received mixed reactions, and the controversy around it mostly stems from concerns about its reliability and possible bias. In this article, we’ll take a closer look at Vibe to see if it’s worth using.

First, let’s understand how Vibe works. The feature is based on feedback from Google Maps users who have visited and rated a particular location. When leaving a review, users can rate the atmosphere on a scale ranging from “very lively” to “very quiet”. These ratings are then aggregated into an overall “vibe score” for the location. Vibe is designed to capture the atmosphere of a place independently of its other attributes, helping users who are looking for a particular type of environment, whether that’s a quiet cafe, a loud bar, or a romantic park.
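
To make that aggregation concrete, here is a minimal sketch of how labeled ratings could be averaged into a single score. The labels, the numeric scale, and the averaging are all assumptions for illustration; Google has not published how the real vibe score is computed.

```python
from statistics import mean

# Hypothetical mapping from rating labels to numbers; Google has not
# published its actual labels or scoring formula.
RATING_SCALE = {
    "very quiet": 1,
    "quiet": 2,
    "moderate": 3,
    "lively": 4,
    "very lively": 5,
}

def vibe_score(ratings: list[str]) -> float:
    """Average individual atmosphere ratings into one vibe score."""
    values = [RATING_SCALE[r] for r in ratings if r in RATING_SCALE]
    if not values:
        raise ValueError("no usable ratings")
    return mean(values)

# Example: a cafe rated by four visitors averages out as "quiet".
print(vibe_score(["quiet", "very quiet", "moderate", "quiet"]))  # 2.0
```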

However, not everyone is convinced that Vibe is a reliable feature. Critics argue that it’s highly subjective and depends on the personal perception of the reviewer. Some users might find a place “very lively” while others might consider it “too noisy” or “too crowded”. Likewise, some users might find a location “romantic,” while others might find it “boring” or “overrated.” Therefore, the Vibe feature may not always accurately reflect the atmosphere of a particular location.
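
That subjectivity is easy to demonstrate with the same hypothetical 1-to-5 scale used in the sketch above: an average score alone hides how much reviewers disagree.

```python
from statistics import mean, stdev

# Two hypothetical places with identical average scores: one is
# genuinely "moderate", the other splits reviewers down the middle.
contested = [1, 5, 1, 5, 1, 5]  # half "very quiet", half "very lively"
agreed    = [3, 3, 3, 3, 3, 3]  # everyone says "moderate"

for name, ratings in (("contested", contested), ("agreed", agreed)):
    print(name, mean(ratings), round(stdev(ratings), 2))
# contested 3 2.19
# agreed 3 0.0
```

Same mean, very different spread: a vibe score that reports only the average can misrepresent how a place actually feels.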

There are also concerns that Vibe could attract biased feedback. For instance, some users may be more likely to rate a location highly if it’s popular, highly rated, or trendy. And if contributors share similar demographics or interests, their collective ratings may not reflect how other groups experience the same places, leading to regional or cultural biases. Either effect could produce a skewed view of a location’s vibe, which could be misleading for users.

Despite these criticisms, Google maintains that Vibe is a valuable feature for users. According to Google, Vibe provides a useful glimpse into the atmosphere of a particular location. Users can use it as one piece of information while making decisions about where to visit. It’s also worth noting that Vibe is still in the early stages of development, and Google is continually refining the feature.

In conclusion, Google Maps’ new Vibe feature is a double-edged sword. It can give users helpful insights into the ambiance of a particular location, but it is also subject to the reliability and bias concerns described above. Combined with other factors such as location, price, and reviews, it remains a useful tool; whether to trust it is ultimately up to each user.

A new Google Maps feature is intended to help you get a “vibe” about where you’re going, but the technology could be prone to bias. 

  • Google says it plans to roll out a new feature to its Maps app that gives users the “vibe” of a neighborhood. Some experts say that the feature could lead to bias. One observer says that places of interest highlighted are more likely to be in gentrifying neighborhoods.

Neighborhood Vibe works by showing user reviews as you pan around an area. Other new features let users see how busy a neighborhood might be, based on Google’s crowd-level data from businesses there, and what the weather may be like on the day they plan to arrive. While the new Maps update hasn’t rolled out yet, some experts see the potential for trouble.

“It’s standard practice for computer scientists to continuously improve AI models based on new data,” Daniel Wu, a researcher in the Stanford AI Lab and cofounder of the Stanford Trustworthy AI Institute, which focuses on technical research to make AI safe, told Lifewire in an email interview. “What that means is, as Google rolls this feature out, they’ll likely be training the model to show reviews that more people click on or find useful. But this can lead to a biased sample of reviews.” 
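
Wu’s concern can be illustrated with a toy simulation (the numbers and click behavior are invented): if reviews are ranked by accumulated clicks, whatever is shown first keeps collecting clicks, and the sample the model learns from drifts away from the full population of reviews.

```python
import random

random.seed(0)

# Five reviews start roughly equal; only the top two are ever shown.
clicks = {f"review_{i}": 1 for i in range(5)}

def top_reviews(clicks: dict[str, int], k: int = 2) -> list[str]:
    """Rank reviews by past clicks and surface the top k."""
    return sorted(clicks, key=clicks.get, reverse=True)[:k]

for _ in range(1000):
    for review in top_reviews(clicks):
        if random.random() < 0.3:  # users can only click what is shown
            clicks[review] += 1

print(clicks)
# review_0 and review_1 accumulate hundreds of clicks; the other three
# are never shown again, leaving a biased sample of "useful" reviews.
```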

Whose Vibe?

To determine the vibe of a neighborhood, Google says it combines AI with local knowledge from Google Maps users who add more than 20 million contributions to the map each day—including reviews, photos, and videos. 

“Say you’re on a trip to Paris—you can quickly know if a neighborhood is artsy or has an exciting food scene so you can make an informed decision on how to spend your time,” the company wrote on its blog. 

Herve Andrieu, a Google Maps Local Guide who doesn’t work for the company but runs a private website on the subject, said in an email interview that Maps users provide data at a minimum by telling Google Maps where they want to go and by sharing their location while using the app. Contributing users go further and supply extra information such as reviews and photos.

Andrieu said that bias might arise with established existing points of interest. “The algorithm will necessarily keep recommending the most popular spot, which in turn will always attract more users, which in turn proves the AI to be correct,” he added. “I am wondering how ’local gems,’ i.e., lesser known, less frequented spots, will get a chance to appear.”
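
Andrieu’s loop can be sketched the same way (the spots, visit counts, and ratings below are invented): a recommender that ranks purely by visit count keeps the incumbent on top, and the better-rated “local gem” never surfaces.

```python
# Invented data: an established spot vs. a better-rated local gem.
spots = {
    "established cafe": {"visits": 5000, "rating": 3.9},
    "local gem":        {"visits": 40,   "rating": 4.8},
}

def recommend(spots: dict) -> str:
    """Pure popularity ranking: the most-visited spot always wins."""
    return max(spots, key=lambda name: spots[name]["visits"])

for _ in range(365):                          # a year of recommendations
    spots[recommend(spots)]["visits"] += 100  # being recommended drives visits

print(recommend(spots))              # still "established cafe"
print(spots["local gem"]["visits"])  # unchanged: 40
```

One common mitigation is to blend popularity with rating quality or add an exploration bonus so lesser-known places occasionally get shown; whether Google does anything like this is not public.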

The Vibe feature “can lead to biased results when places of interest highlighted are more likely to be in gentrifying neighborhoods or predominantly in affluent areas, while restaurants and establishments operating in primarily minority neighborhoods (or minority-owned businesses) are less likely to be so highlighted,” Anjana Susarla, the Omura-Saxena Professor in Responsible AI at the Broad College of Business at Michigan State University, told Lifewire via email.

“Neighborhood vibe highlights popular spots in an area based on contributions from the Google Maps community - a diverse set of people with different backgrounds and experiences,” Google spokesperson Genevieve Park told Lifewire via email. “When it launches, it’ll be available for all neighborhoods around the world, making it easy to see a range of popular places at a glance - from local gems to newer establishments. As always, we take multiple steps to ensure that Google Maps accurately reflects the real world.”

Preventing AI Bias

Modern AI employs a general technique known as deep learning, in which relevant features can be automatically inferred and extracted from the underlying data without the need for a researcher to select them by hand, Flavio Villanustre, the global chief information security officer for LexisNexis Risk Solutions, told Lifewire in an email interview.

When this process is applied to a system such as Google Maps, the deep learning models have probably identified features that make a neighborhood seem reputable, desirable, or trustworthy, and established correlations between those qualities and specific characteristics in the data.

“For example, higher levels of poverty could correlate with the proximity to clusters of fast-food chain restaurants; higher income populations may reside closer to luxury stores,” Villanustre said. “But while doing so, if the data is not normalized by protected classes of individuals (e.g., skin color, religion, ethnicity, gender, etc.), it’s quite possible the resulting model will leverage proxies to these classes, as it infers ‘desirability.’ Some of these proxies can affect the results of the model and those protected classes in a negative manner.”
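
One way to act on Villanustre’s warning, sketched here with invented per-neighborhood numbers, is to measure how strongly a candidate feature tracks a protected attribute before letting a model use it:

```python
from statistics import correlation  # available in Python 3.10+

# Invented per-neighborhood data. Reducing a protected attribute to a
# single number is a gross simplification, used only for illustration.
fast_food_density = [8.0, 6.5, 7.2, 1.1, 0.9, 1.5]  # outlets per km^2
protected_share   = [0.7, 0.6, 0.8, 0.1, 0.2, 0.1]  # population share

r = correlation(fast_food_density, protected_share)
print(f"correlation = {r:.2f}")  # roughly 0.97 for this toy data

# A high |r| flags the feature as a likely proxy; mitigations include
# dropping the feature or normalizing model scores within groups.
if abs(r) > 0.6:
    print("fast_food_density is a likely proxy for the protected attribute")
```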

Nabeel Ahmad, a professor of Human Capital Management at Columbia University, told Lifewire in an email interview that bias in AI cannot be entirely prevented. Instead, developers can take steps to reduce it.

“First, use multiple data sources to reduce over-reliance on any single data source,” Ahmad said. “Second, have a governance system of people who define what the AI model should be doing (i.e., parameters to take into consideration, etc.), what its expected output should be, and routinely run tests to check how accurate the AI results are to expectations. Last, make adjustments over time as needed to fine-tune the AI so that it provides more accurate and useful results.” 
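
As a concrete version of Ahmad’s “routinely run tests” step, a governance group could define the expected output and check deviations automatically. Everything below (the group names, shares, and tolerance) is invented for illustration:

```python
def audit_highlight_shares(shares: dict[str, tuple[float, float]],
                           tolerance: float = 0.10) -> list[str]:
    """shares maps a group to (share_of_highlights, share_of_all_places);
    return the groups whose gap exceeds the agreed tolerance."""
    return [group for group, (highlighted, baseline) in shares.items()
            if abs(highlighted - baseline) > tolerance]

# Invented numbers: minority-owned businesses are 30% of all places
# but only 12% of highlighted ones, so the audit flags that gap.
flagged = audit_highlight_shares({
    "minority-owned":   (0.12, 0.30),  # gap 0.18 -> flagged
    "gentrifying area": (0.45, 0.40),  # gap 0.05 -> within tolerance
})
print(flagged)  # ['minority-owned']
```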

Bias in AI is a constant concern because large datasets are not necessarily representative of what they’re supposed to reflect, Irina Raicu, the director of internet ethics at the Markkula Center for Applied Ethics at Santa Clara University, told Lifewire in an email interview.

Given the complexity of the non-digital world, “large amounts of data” can still mean “incomplete and inaccurate,” Raicu said. “Bias can be expressed even at the level of what we choose to measure–what we choose to turn into data–not just by not including certain variables (or people) in a data set in representative numbers, but also by not developing certain datasets at all.”

Correction 10/7/22: Updated paragraph two for clarity and paragraph nine to include a response from Google.
