If you have an Etsy shop or an online store, then you know how much a product’s photo quality affects sales. We know that the quality of the photo is the biggest driver of a listing’s sales, more so than its tags, price, or customer reviews. But how do you choose which photo to use to promote your product? Should the image show the product on its own, or the product in action? Should it be a detail shot that shows off your craftsmanship? If there were an empirical way to learn which photo will drive more traffic, and therefore conversions, to your shop, would you use it? (Spoiler: there is.)


Time for a small experiment

The two photos below show the same pair of sandals. One is photographed on a model, and the other is shot against a plain background. Which image do you think is more appealing and will drive more traffic? Take a guess before you continue reading.

 

The correct answer? Our test showed that the plain-background image generated 30-50% more traffic than the one with feet. The seller had originally chosen the image with feet as the primary image.

Let’s try another experiment with a different product. Take a look at these images of a necklace. Which one do you think will drive more traffic this time?

Applying the same logic as the test above, you might guess that the photo showing the necklace without a person modelling it would attract more clicks. You might be surprised to learn that the results this time were exactly the opposite: the photo on the right generated 20% more traffic.

As humans, we are always looking for rules and systems to guide our decisions. The truth is, some rules work better than others, and even the best rules have exceptions. For example, photos that feature products in real-world contexts are more successful on average, but tests have shown that there are exceptions to this rule. We’ve also learned that the only way to pick the best photo is to try a few and see which one works.

After testing tens of thousands of photos, there is one thing we can vouch for: it is really, really hard to predict which photos are best without running a test!

Test! Test! Test!

Modern tech companies such as Google, Amazon, Facebook, and Netflix, researchers around the world, and regulatory agencies like the FDA all use the same statistical technique that we use: the A/B test. There is great value in making decisions backed by data, so much so that consultants charge tens of thousands of dollars to run and analyse randomised experiments for large businesses. Whatify makes these professional techniques available to everyone. We are experts at running and analysing A/B tests, and we want to share our expertise with you so that you can increase sales conversions and build a more successful business.

How it works

Let’s take a closer look at how A/B testing works. This might get a little technical, but it is worth getting clarity on the subject, as A/B testing is fast becoming a required part of the toolkit for most businesses.

Think of A/B testing as a systematic way of “trying different stuff” that is designed to produce reliable results. A/B tests address two specific problems that might otherwise distort an experiment’s findings:

1) Confounding Variables: If you try photo A on Wednesdays and photo B on Saturdays, you might find that photo A is better. But is this really because of the photo, or do you just get more traffic on Wednesdays than Saturdays?

2) Noise: If you test two photos and photo A gets 13 views while photo B gets 16 views, does that really mean that photo B is better or was this due to chance? If you thought that photo A was better before the test, is 16 vs. 13 enough evidence to overturn your original judgment? Can we be sure that photo B is not performing better due to luck?

We’ll look at confounding variables first. In a perfect world, everything would be exactly the same when you test photo A and photo B. You could try testing both photos on Wednesdays, but you would also want the time of day to be constant. You could try testing both photos on Wednesdays at 8:30 pm, but now you have to test the photos on separate weeks (you can’t make both photos primary simultaneously!), and the calendar date might matter too. Trying to hold everything constant seems impossible.

The way A/B testing solves this problem is by essentially “flipping a coin” to decide which photo will be primary at any given time. This might seem counterintuitive: how does randomizing the test hold things constant? The trick is that if you flip a coin every few hours to decide which image is primary, then on average all the other variables cancel each other out. You won’t end up testing the two photos on exactly the same days or at exactly the same times, but you can be confident there are no systematic biases.
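To make the idea concrete, here is a minimal Python sketch of what that randomized assignment could look like. The photo names and the six-hour block length are illustrative assumptions, not Whatify’s actual implementation.

```python
import random

# A minimal sketch (not Whatify's actual system): every few hours we "flip a
# coin" to decide which photo is shown as the listing's primary image.
# The photo names and the six-hour block length are illustrative assumptions.

PHOTOS = ["photo_a", "photo_b"]

def assign_primary_photo():
    """Randomly pick which photo is primary for the next time block."""
    return random.choice(PHOTOS)

# Simulate two weeks of six-hour blocks (4 blocks per day) and count how often
# each photo ends up as the primary image.
assignments = [assign_primary_photo() for _ in range(14 * 4)]
print({photo: assignments.count(photo) for photo in PHOTOS})
```

Because the assignment is random, days of the week, times of day, and holidays all get spread across both photos on average, which is what removes the systematic bias.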

With confounding variables out of the way, we arrive at the issue of noise. Once we know there are no systematic biases, we use statistical methods to figure out how likely it is that one photo is performing better purely by chance, and we incorporate that into our analysis. In other words, we make sure that one photo isn’t just getting lucky. We also combat the chance of luck creating false results by running the test for an extended period of time: if a photo performs well over a long period, it is less likely that its results are a product of luck. We combine all of these factors to determine whether the data, taken together, suggests that photo A or photo B will more successfully convert sales.
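As a rough illustration of the noise problem, here is a small Python simulation of the 13 vs. 16 views example above. It asks how often a gap that large would appear if the two photos were actually equally appealing. Whatify’s real analysis is more sophisticated, so treat this as a sketch only.

```python
import random

# A rough illustration of the "noise" question using the 13 vs. 16 views
# example from the post (not Whatify's actual analysis). If both photos were
# equally appealing, each of the 29 views would be equally likely to land on
# either photo. We simulate that world many times and ask how often a gap at
# least as large as the observed one shows up by chance alone.

views_a, views_b = 13, 16
total_views = views_a + views_b
observed_gap = abs(views_a - views_b)

simulations = 100_000
at_least_as_extreme = 0
for _ in range(simulations):
    simulated_b = sum(random.random() < 0.5 for _ in range(total_views))
    simulated_a = total_views - simulated_b
    if abs(simulated_a - simulated_b) >= observed_gap:
        at_least_as_extreme += 1

probability = at_least_as_extreme / simulations
print(f"Chance of a gap of {observed_gap}+ views by luck alone: {probability:.2f}")
# The probability comes out far above conventional significance thresholds,
# so 16 vs. 13 views is very weak evidence that photo B is actually better.
```

This is why a short test with a handful of views rarely tells you anything: the gap has to be large enough, and sustained for long enough, to rule out plain luck.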

What were the results, you ask? An increase in sales.

In order to determine how much our recommendations increase sales for your shop, we generate our recommendations using only half of the available data. We use the other half as a “control” to learn how many more views our recommendations acquire in comparison (for my statistical gurus out there, this is called “cross-validation”). If the “winners” we pick on the first half just “got lucky,” they won’t out-perform other photos in the second half of the data. On the other hand, if those winners really are better, we should see that they continue to generate more traffic in the “unused” half of the data. If that’s the case, we can be confident the estimated improvements we generate are not due to false positives.
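Here is a minimal sketch of that split-half check in Python, using made-up view counts. The point is just to show the mechanics: pick a winner on the first half of the data, then see whether it still wins on the held-out half.

```python
import random

# A minimal sketch of the split-half check described above, using made-up
# hourly view counts (photo_b is simulated to be genuinely a little better).
# We pick a "winner" on the first half of the data, then verify it on the
# held-out second half.

random.seed(0)
hours = 200
views = {
    "photo_a": [random.randint(0, 5) for _ in range(hours)],
    "photo_b": [random.randint(1, 6) for _ in range(hours)],
}

def winner(data, start, stop):
    """Return the photo with the most total views over the given time slice."""
    return max(data, key=lambda photo: sum(data[photo][start:stop]))

half = hours // 2
first_half_winner = winner(views, 0, half)
holdout_winner = winner(views, half, hours)

print("Winner on the first half:   ", first_half_winner)
print("Winner on the held-out half:", holdout_winner)
# If the first-half winner also wins on the held-out half, we gain confidence
# that it didn't just get lucky.
```

A photo that only “got lucky” in the first half has no reason to keep winning in the second half, which is what makes this a useful guard against false positives.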

Want to know more?

Head over to Whatify.com to learn more about what we do.
If you’re interested in learning more about how A/B testing is used in the business world, check out this article. If you love the mathematical aspect and want to dig into the details, try this textbook.
And if you’re just interested in using A/B testing to increase traffic and conversions to your Etsy shop by 5-25%, then try signing up for Whatify. (It’s free.)


Disclaimer: This post was written and supplied by Jake over at Whatify – The Makers Co received no payment for this post (I just think it’s a cool thing to share!)

Jake Phillips


Guest Blogger

Jake is a passionate problem solver who is driven to use data and technology for good. He has worked with the New York City Department of Education to improve public high schools and founded a nonprofit to educate incarcerated individuals at Rikers Island. Jake holds an M.B.A. from Yale University and a B.A. in Economics from Brown University.

If you're ready to take your business idea and make it a reality, it's time to become a member of The Makers Academy. 

I give you the tools to plan, launch and grow your idea into a thriving business so you can become self-employed and stay that way.

Cut through the bullshit so you can start creating your dream business, TODAY

I want to know more