Beyond Logos and Patterns: How We’re Training eBay’s AI to Understand Brands

We’re researching how to recognize brands using computer vision by training our AI to look beyond logos and iconic patterns.

Think of your favorite brands: How do you recognize them when you are shopping? Maybe it’s that iconic swoosh on your favorite sneakers or that distinctive plaid on your handbag or that apple on your phone. Brands express themselves in various visual forms.

At eBay, we’re researching how to recognize brands using computer vision. We’re training our AI to look beyond logos and iconic patterns to dial in on the unique characteristics that brands use to create specific items. We’ve compared our deep learning model to human understanding in an experiment to validate visual perceptions of brands.

As your brain evaluates a shoe or a bag, it’s taking in and processing all sorts of information—from the style to the pattern to the fabric. It makes a decision with its best hypothesis based on a variety of factors and insights learned over time. These are some of the elements that make up brand recognition. And when you’re shopping, this recognition is one of the key steps to finding the perfect item since brand encapsulates such rich information.

We set out to understand how to identify brands visually by targeting unique designs, patterns, stitching or hardware. We also wanted to understand how deep networks distinguish between similar products and to compare our analysis to human perception.

We investigated how our deep learning models build internal representations of brands, and we examined how those representations vary across products. This allowed us to further understand the classification path the AI uses. We are using these representations to analyze the visual embodiment of brands at large scale and to find the key characteristics of a brand's visual expression in a single product, across a brand as a whole and across categories.
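
To make "internal representation" concrete, the sketch below pulls a pooled feature vector out of a convolutional backbone with a forward hook; comparing these vectors across listings is one way to see how a brand's representation varies over products. A stock torchvision ResNet-50 and standard ImageNet preprocessing stand in here for our production model and pipeline.

```python
import torch
import torchvision.models as models
from torchvision import transforms
from PIL import Image

# Illustrative only: a stock ResNet-50 stands in for the production brand model.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.eval()

# Capture the pooled (penultimate) features as the product's internal representation.
features = {}
def hook(module, inputs, output):
    features["embedding"] = output.flatten(1)

backbone.avgpool.register_forward_hook(hook)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(image_path: str) -> torch.Tensor:
    """Return a 2048-d embedding for one product photo."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        backbone(image)
    return features["embedding"].squeeze(0)

# Comparing embeddings across listings of one brand hints at shared visual traits,
# e.g. cosine similarity between two products:
# sim = torch.nn.functional.cosine_similarity(embed("a.jpg"), embed("b.jpg"), dim=0)
```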

It’s important to note that one of the reasons eBay’s computer vision models are so powerful is that we train them on images of varying quality, ranging from professional or stock photos to amateur, dimly lit photos with complicated backdrops. Our dataset consists of 3,828,735 clothing products from 1,219 brands spanning a wide range of clothing types. These are real-world ecommerce images from our catalog. For every product, we collect an image, a title and a set of attributes from which we extract the brand information.
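
As a rough sketch of the data shape, here is what a single catalog record could look like once exported; the field names (image_path, title, attributes, brand) and the JSON layout are illustrative placeholders rather than eBay's actual schema.

```python
import json
from dataclasses import dataclass
from typing import Dict

@dataclass
class Listing:
    """One catalog record: field names here are illustrative, not eBay's schema."""
    image_path: str
    title: str
    attributes: Dict[str, str]

def brand_of(listing: Listing) -> str:
    """Pull the brand label out of the structured attributes, with a fallback."""
    return listing.attributes.get("brand", "unknown").strip().lower()

# Example record as it might appear after export to JSON lines.
raw = ('{"image_path": "img/123.jpg", "title": "Quilted crossbody bag", '
       '"attributes": {"brand": "Vera Bradley", "color": "purple"}}')
record = Listing(**json.loads(raw))
print(brand_of(record))  # -> "vera bradley"
```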

Once the model is trained, it outputs, for a given image, the probability that each brand is the one featured in the image. Then we follow where the neurons at each layer focus when making that decision, using “attention” maps. Our goal is to visualize and interpret the deep model’s decisions in order to explain the visual characteristics of fashion brands. For instance, we saw that our AI was recognizing the three-stripes signature on Adidas products instead of the logo.
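
One common way to build such attention maps is a Grad-CAM-style attribution over the last convolutional block, sketched below. This is a generic illustration using an assumed torchvision ResNet-50 and 224x224 inputs, not necessarily the exact attribution method behind our model.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Sketch of a Grad-CAM-style attention map; ResNet-50 stands in for the classifier.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["maps"] = output

def bwd_hook(module, grad_input, grad_output):
    gradients["maps"] = grad_output[0]

layer = model.layer4  # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

def attention_map(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return an HxW heat map showing where the model looked for `class_idx`."""
    logits = model(image)                        # image: (1, 3, 224, 224)
    model.zero_grad()
    logits[0, class_idx].backward()              # gradient of the brand score
    weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)  # channel importance
    cam = F.relu((weights * activations["maps"]).sum(dim=1))    # weighted activations
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                        mode="bilinear", align_corners=False)
    return (cam / cam.max()).squeeze()           # normalize to [0, 1]

# Usage: heat = attention_map(preprocessed_image, predicted_brand_index)
```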

As part of our work to understand brands, we’ve also analyzed our AI at the neuron level to get insights into the different visual indicators of fashion brands. In our deep learning model, neurons collect and process information, picking up on unique characteristics and assigning them to the most likely brands. We’ve created attention maps to interpret the visual characteristics of a brand present in a single image and to model the general design direction of a brand as a whole.
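
A simple version of this kind of neuron-level probe, sketched below, ranks images by how strongly they excite a single channel of a late convolutional layer and then inspects the listings behind the top hits. The backbone and layer choice are assumptions for illustration.

```python
import torch
import torchvision.models as models

# Illustrative neuron-level probe: which images most strongly excite one channel
# of the last conv block? A stock ResNet-50 again stands in for the real model.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

acts = {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(maps=o))

def top_activating_images(batch: torch.Tensor, channel: int, k: int = 5):
    """Return indices of the k images in `batch` that most excite `channel`."""
    with torch.no_grad():
        model(batch)                                  # batch: (N, 3, 224, 224)
    peak = acts["maps"][:, channel].amax(dim=(1, 2))  # peak spatial response
    return peak.topk(min(k, batch.size(0))).indices.tolist()

# Looking at the listings behind those indices reveals what the neuron responds
# to, e.g. a quilted texture, a plaid, or a three-stripe motif.
```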

Even more exciting, we observed that certain neurons diverged into experts or generalists as they learned over time. Some neurons leaned toward specialties and became decision makers, while others remained generalists. For example, once a neuron learned to identify a color like purple and a pattern like paisley, it was more likely to be called on to identify purple and paisley characteristics in the future. This helps us pinpoint which decision makers in our neural networks make certain judgments, and it brings us closer to answering the all-important question of why certain neurons in deep learning models become decision makers and gain more authority over time.
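
One generic way to quantify this expert-versus-generalist split is a class selectivity index, computed per neuron from its average activation for each brand; the sketch below shows that calculation on toy numbers and is not necessarily the exact metric we use.

```python
import torch

def class_selectivity(mean_acts: torch.Tensor) -> torch.Tensor:
    """
    mean_acts: (num_neurons, num_brands) average (non-negative) activation of
               each neuron over images of each brand.
    Returns a score in [0, 1] per neuron: near 1 means the neuron fires almost
    only for one brand (an "expert"), near 0 means it fires evenly (a "generalist").
    """
    top, _ = mean_acts.max(dim=1)                        # favorite-brand response
    rest = (mean_acts.sum(dim=1) - top) / (mean_acts.size(1) - 1)
    return (top - rest) / (top + rest + 1e-8)

# Toy check: one paisley/purple "expert" and one even-handed "generalist".
acts = torch.tensor([[9.0, 0.5, 0.4],    # fires mostly for brand 0
                     [1.0, 1.1, 0.9]])   # responds to everything
print(class_selectivity(acts))           # high score first, low score second
```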

Our work analyzes the deep neural network model to understand and characterize the types of neurons that help predict brand from a single image. Brand prediction beyond the logo is an example of narrow AI, a task where AI is more efficient than humans. Taking a step back, AI technology should be explainable, since we use it to make big decisions.

As we look to the future, this application of AI and our understanding of these neurons is paving the way for answers to specific and important questions that will help reduce bias, sharpen personalization and ultimately improve our recommendations. By understanding brand characteristics, we can further tailor the shopping experience to individuals, serving them an experience where everything they see is personalized to them.

eBay is not affiliated with or endorsed by Vera Bradley or Adidas.

For a more in-depth look into our computer vision efforts, read our latest Tech Blog.