Zillow uses explainable AI and data to revolutionize how people sell houses

Zillow has been a big name for online home seekers. More than 135 million homes have been listed on the platform, and the company has streamlined the real estate transaction process across home loans, title services, and buying. Its success in providing customized search functions, product offerings, and accurate home valuations, with a median error rate of less than 2%, has been thanks to the power of AI.

Zillow’s initial forays into AI in 2005 centered around blackbox models for prediction and accuracy, Stan Humphries, chief analytics officer at Zillow, said at VentureBeat’s virtual Transform 2021 conference on Tuesday. Over the past three or four years, as Zillow started purchasing homes directly from sellers, the company shifted toward explainable frameworks that add context while still delivering the same levels of accuracy as its blackbox models. “That’s been kind of a fun odyssey,” Humphries said, noting that the results needed to be “understandable and intelligible” to a consumer in the same way as if the conversation were with a real estate agent. Zillow took inspiration from comparative market analyses (CMAs), the estimated appraisals of a property that realtors provide, to create an algorithm that analyzes three to five similar homes.

“Humans can wrap their heads around [that], and say, ‘Okay, I see that home’s pretty similar, but it’s got an extra bedroom, and now, there’s been some adjustment for that extra bedroom,’ [compared to] a full ensemble-model approach using a ton of different methodologies,” Humphries said.
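
To make that concrete, here is a minimal, hypothetical sketch in Python of a CMA-style estimate: take the few most similar recent sales and adjust each sale price for feature differences such as that extra bedroom. The data schema, similarity weights, and dollar adjustments are illustrative assumptions, not Zillow’s actual algorithm.

    # Hypothetical CMA-style estimate: the schema, similarity weights,
    # and dollar adjustments below are assumptions for illustration only.
    from statistics import median

    # Assumed flat dollar adjustments per unit of feature difference.
    ADJUSTMENTS = {"bedrooms": 15_000, "bathrooms": 10_000, "sqft": 150}

    def similarity(home, comp):
        """Weighted feature distance; smaller means more similar."""
        return (abs(home["sqft"] - comp["sqft"])
                + 500 * abs(home["bedrooms"] - comp["bedrooms"])
                + 300 * abs(home["bathrooms"] - comp["bathrooms"]))

    def cma_estimate(home, recent_sales, k=3):
        """Estimate value from the k most similar recent sales."""
        comps = sorted(recent_sales, key=lambda c: similarity(home, c))[:k]
        adjusted = []
        for comp in comps:
            price = comp["price"]
            # Nudge the comp's price toward the subject home's features,
            # e.g. add value for each extra bedroom the subject home has.
            for feature, dollars in ADJUSTMENTS.items():
                price += dollars * (home[feature] - comp[feature])
            adjusted.append(price)
        return median(adjusted)

Because every number in the estimate traces back to a named comp and a named adjustment, the result stays inspectable in exactly the way Humphries describes.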

The move to explainable models helped consumers understand the value of their home, but also let them inspect the estimate and get a “gut check” against their own intuition, Humphries said. Having now seen that it’s possible to get “the best of both worlds,” the accuracy of blackbox models and the intuitiveness of explainable ones, Humphries said he wished Zillow had shifted approaches sooner.

Improving the appraisal model

Zestimate, the AI tool Zillow uses to estimate the market value of homes, has improved largely thanks to gains on two fronts: the data it draws on and the algorithms that use it.

“We think about the gains that we’re going to make as being either data-related, [which includes] getting new data or new features out of that data, or algorithm-related, which is new ways to combine and utilize the features,” Humphries said.

In the past, Zillow used only public record data for Zestimate, but it now also incorporates information associated with previous sales of comparable homes. Using natural language processing, Zestimate can pull signals from what people wrote and said about a property when interacting with Zillow’s representatives. Another rich source of data has been computer vision, which mines information from all the images associated with a home. The intuition is straightforward: people look at appraisals, then look at the homes themselves and make judgments about which house looks nicer. Zillow had to teach computers to do that same type of work, Humphries said.
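
As a rough illustration of that pipeline, the sketch below merges the three sources described here (public records, listing text, and photos) into one feature dictionary. The term list, scoring stand-ins, and field names are assumptions; in production the text and image signals would come from trained NLP and computer vision models.

    # Illustrative only: crude stand-ins for NLP and computer vision models.
    RENOVATION_TERMS = ("remodeled", "renovated", "updated", "new roof")

    def text_features(description):
        """NLP stand-in: count renovation-related phrases in listing text."""
        text = description.lower()
        return {"renovation_mentions": sum(t in text for t in RENOVATION_TERMS)}

    def image_features(photo_quality_scores):
        """Vision stand-in: average per-photo quality scores in [0, 1]."""
        return {"avg_photo_quality":
                sum(photo_quality_scores) / len(photo_quality_scores)}

    def build_features(public_record, description, photo_quality_scores):
        """Merge all three sources into one feature dict for a valuation model."""
        return {**public_record,
                **text_features(description),
                **image_features(photo_quality_scores)}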

When Zestimate launched in February 2006, it required 35,000 statistical models to estimate the market value of 2.4 million homes. Now the tool generates 7 million machine learning models to estimate the value of 110 million homes nationwide.

“There’s been a lot of algorithmic advances in what we’re doing. But behind the scenes, there’s also been a huge amount of additional data that we take in now that we just didn’t back then,” Humphries said.

Zillow recently announced a new release of the Zestimate algorithm, version 6.7. The update introduces a framework that leverages a neural network within the ensemble approach, making the algorithm markedly more accurate and decreasing Zillow’s median absolute percent error from 7.6% to 6.9%.
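
For reference, median absolute percent error, the metric behind those figures, is the median across homes of |estimate - sale price| / sale price. A minimal sketch of the computation, with made-up numbers:

    from statistics import median

    def median_ape(estimates, sale_prices):
        """Median of |estimate - actual| / actual over paired homes."""
        return median(abs(est - actual) / actual
                      for est, actual in zip(estimates, sale_prices))

    # Made-up example: three estimates vs. eventual sale prices.
    print(median_ape([310_000, 195_000, 425_000],
                     [300_000, 210_000, 420_000]))  # 0.0333..., i.e. ~3.3%

Because the metric is a median rather than a mean, a handful of badly missed estimates does not drag the headline number around.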

Zillow’s AI journey

The company’s innovation has to strike a balance between consumer interest and technical limitations. The team thinks about consumers’ pain points and the products that could solve those challenges, but also has to consider what can actually be built. In the case of Zestimate, the context behind the appraisal was what customers were asking for. The representatives using the tool didn’t ask for insights generated by natural language processing and computer vision because they didn’t know such things were possible, Humphries said.

Currently, the company is working on having users close their own deals with a human agent’s evaluation. The goal is to have this evaluation eventually be completely machine-generated.

“The customer is kind of our North Star,” Humphries said.
