Oscar is a Senior Paid Search Analyst at Wheelhouse DMG. He works out of Richmond, VA, and has been on the agency side of digital marketing since 2015. His work interests include making sense of data and understanding the behavior that drives it, helping his clients make better-informed marketing decisions. When he's not optimizing client accounts, Oscar can be found supporting the UVA Marching Band and reading fiction.
One of the core competencies that separates Amazon sellers that can scale from those who can’t is inventory management.
This is especially true for sellers utilizing Fulfillment by Amazon (FBA), where a well-oiled process both minimizes storage costs and maximizes customer satisfaction through healthy stock rates. To complicate this balancing act, FBA policies and terms change frequently, and these updates can affect seller fees and, therefore, bottom lines.
Let's get right to the timely bit. July 1st, 2018 is a mini-Judgment Day for third-party sellers on Amazon. This will mark the end of the per-unit storage limit in favor of a volume-based system. More critically, a new policy goes into effect that uses Amazon's Inventory Performance Index (IPI), a score Amazon calculates for each seller, to determine product storage limits and overall inventory health.
The IPI is comparable to a FICO credit score. Sellers are given the general inputs that factor into the equation, but not the specific formula that turns those inputs into a score (in other words, a black box). Just as FICO protects its formula from abuse, Amazon keeps the IPI calculation proprietary.
There are four factors used to determine the score:
1. Excess inventory percentage
2. FBA sell-through rate
3. Stranded inventory percentage
4. FBA in-stock rate
If any of these are unfamiliar, Amazon Seller Central breaks it down in detail and provides transparency into each factor for seller accounts in the Inventory Dashboard.
A final score check will be performed on the last day of each quarter starting June 30th, 2018.
Seller accounts that fulfill with Amazon and score lower than 350 will have a storage limit imposed that persists through the following quarter, regardless of score improvements in that quarter. Inventory beyond the limit will be charged overage fees at a rate of $10 per cubic foot per month. Ouch!
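To put that fee in perspective, here's a quick back-of-the-envelope calculation. The storage limit and inventory volume are made up for illustration; only the $10 rate comes from the policy:

```python
# Back-of-the-envelope overage math. The storage limit and inventory
# volume below are hypothetical; the $10/cubic foot/month rate is
# Amazon's stated overage fee.
OVERAGE_RATE = 10  # USD per cubic foot per month

storage_limit_cf = 500  # hypothetical imposed storage limit (cubic feet)
inventory_cf = 560      # hypothetical inventory volume on hand (cubic feet)

overage_cf = max(0, inventory_cf - storage_limit_cf)
monthly_overage_fee = overage_cf * OVERAGE_RATE
print(f"{overage_cf} cu ft over the limit -> ${monthly_overage_fee}/month in fees")
# 60 cu ft over the limit -> $600/month in fees
```

Even a modest overage compounds month after month, which is why the 350 threshold matters so much.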
Seller accounts with a score at or above 350 will have no limit on storage (normal FBA storage fees and long-term inventory fees still apply).
For obvious reasons, there are major benefits to having an IPI score above 350: sellers avert headaches and growth can continue uninhibited. If your account is above 350, rest easy, though there are still strategies worth considering to maintain good standing and improve your IPI score. For sellers who find themselves under the threshold, never fear.
For sellers on both sides of the IPI threshold, understand that there is a lag between when actions are taken and when the score is recalculated. Sellers have reported that inventory takedowns and restocks usually update the same day, while sell-through rate typically updates only once a week. Especially if you're near the threshold, plan and adjust restocks and new product additions with care. Remember: the IPI score on the date of the final check is the one that counts, and it determines your storage space for a full quarter.
For sellers who are on the wrong end of the score, let’s go through how to not just survive but thrive with storage limits.
Updates to Amazon’s seller policy can throw a wrench into what was once conventional practice. The Inventory Performance Index and storage limit changes for third-party sellers are no exception. The only thing to be certain of is that this won’t be the last update of its kind. Don’t let policy be the one thing that hampers progress. Happy selling!
This is a guest post from Oscar Chow, Senior Paid Search Analyst at Wheelhouse DMG.
An ongoing challenge for digital marketers is managing their paid search keywords as efficiently as possible. With the oldest accounts approaching two decades old, it's not uncommon to find campaigns whose structures have grown unwieldy over time. In this post, we'll go over techniques that can help you optimize your keyword lists for high performance.
Let's step away from marketing for a moment and share some words of wisdom from a branch manager from "The Office" who advises, "You miss 100% of the shots you don't take."
In paid search, keywords are the shots that digital marketers take. If a keyword isn’t being bid on but has relevance and conversion potential, that’s a missed opportunity. Much like a star athlete who creates the best chances for their team to score, a fundamentally sound SEM campaign will bid on keywords that have compelling ad copy matched with a high-quality landing page to generate results.
In SEM, having the right keyword in the auction at the right time is still a key element of success, even as new search ad formats and machine-learning-assisted campaigns become more commonplace. Should we keep spending on this keyword that hasn't converted? How much longer can we wait? Could spend be more productive on this group of keywords?
We've heard real concerns like these from many of our clients at one point or another. Rightfully so: the keyword list is often an area where marketers apply a fine-toothed comb to find opportunities that could spur growth or create efficiencies.
Let's take a look at how we can make sense of keyword performance. We'll walk through an approach that makes headway but has a few lurking pitfalls to watch out for, starting with an example.
We pull a performance report that filters for keywords that have accrued spend over the past 30 days but no conversions. We can title the spreadsheet 'Inefficient Keywords'.
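As a rough sketch of that pull, assuming the report has been exported to a CSV (the file and column names here are hypothetical; adjust them to your platform's export), the filter might look like this in pandas:

```python
import pandas as pd

# Hypothetical export of a 30-day keyword performance report; the file
# and column names are assumptions, so adjust them to your platform.
report = pd.read_csv("keyword_performance_30d.csv")

# Keywords that accrued spend over the past 30 days but drove no conversions.
inefficient = report[(report["spend"] > 0) & (report["conversions"] == 0)]

# Save the list, biggest spenders first, for review.
inefficient.sort_values("spend", ascending=False).to_csv(
    "inefficient_keywords.csv", index=False
)
```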
For starters, we can safely mark (and later remove) keywords that previously went unnoticed despite really high spend and traffic. Obviously, these are poor performers.
As for the others with traffic here and there, doesn’t it seem prudent to remove these "bad" terms and send them to keyword purgatory as well? After all, when you add it all up, it’s a non-trivial amount of spend that didn’t lead to conversion.
But let's be careful here: by coming to that conclusion, we're effectively creating a low-light reel. Cherry-picking the undesirable parts of the data guarantees we'll be disappointed.
Still, it might feel compelling to x out these keywords. However, because of low sample sizes and statistical noise, swinging the axe without further consideration could be premature.
To gut-check this reaction, ask the following questions:
1. Has the keyword been given enough of a chance (clicks) to perform?
"Enough of a chance" will vary by industry CPC and account. As a rule of thumb, the lower the value per conversion, the lower the spend tolerance should be, and vice versa. If the answer is "yes," stop here; you can confidently deem the keyword unfit.
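Here's a minimal sketch of how you might encode that rule of thumb; the 1.5x multiplier is a placeholder assumption, not a recommendation:

```python
def spend_tolerance(value_per_conversion, multiplier=1.5):
    """Maximum unconverted spend to tolerate before judging a keyword unfit.

    Encodes the rule of thumb above: the lower the value per conversion,
    the lower the spend tolerance. The 1.5x multiplier is a placeholder;
    tune it to your account's CPCs and risk appetite.
    """
    return value_per_conversion * multiplier


def has_had_its_chance(spend, value_per_conversion):
    return spend >= spend_tolerance(value_per_conversion)


# $45 of unconverted spend exceeds the tolerance for a $20 conversion
# value, but not for a $200 one.
print(has_had_its_chance(spend=45, value_per_conversion=20))   # True
print(has_had_its_chance(spend=45, value_per_conversion=200))  # False
```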
2. If there hasn't been sufficient traffic, does performance look any better over a longer period of time?
Time doesn't just heal all wounds: a conversion could be tucked away just outside your set time frame. While you shouldn't make exceptions common, it's probably okay to keep a keyword that's been productive in the past, especially if the lookback window is arbitrary.
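One way to check is to re-pull the same report over a longer window and see which "dead" keywords actually converted. A sketch, continuing the hypothetical exports from earlier:

```python
import pandas as pd

# Hypothetical 30-day and 90-day report exports; file and column names
# are assumptions.
last_30 = pd.read_csv("keyword_performance_30d.csv")
last_90 = pd.read_csv("keyword_performance_90d.csv")

# Keywords that look dead over 30 days...
no_conv_30 = last_30[(last_30["spend"] > 0) & (last_30["conversions"] == 0)]

# ...but have actually converted within the past 90 days.
merged = no_conv_30.merge(
    last_90[["keyword", "conversions"]], on="keyword", suffixes=("_30d", "_90d")
)
reprieved = merged[merged["conversions_90d"] > 0]
print(reprieved["keyword"].tolist())
```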
3. Would a few conversions drastically improve the keyword's results?
There might be a few keywords on the cusp that would go from stinker to star with just two or three conversions. Keywords with low sample sizes often see high volatility over short time frames, because a single conversion can drastically swing their conversion rate.
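A quick what-if calculation can flag those cusp keywords before you swing the axe. This is a sketch with hypothetical figures:

```python
def cpa_with_extra_conversions(spend, conversions, extra=2):
    """What a keyword's cost per acquisition would be with a few more conversions.

    Useful for spotting 'on the cusp' keywords whose stats are dominated
    by small-sample noise.
    """
    adjusted = conversions + extra
    return spend / adjusted if adjusted else float("inf")


# A keyword with $120 of spend and zero conversions looks hopeless, but
# just two conversions would put it at a $60 CPA, which may be perfectly
# acceptable depending on your target.
print(cpa_with_extra_conversions(spend=120, conversions=0, extra=2))  # 60.0
```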
Working through these questions should tell you whether keeping a keyword active or pausing it is the right move.
To make this type of keyword analysis more scalable, Marin ships with a proactive solution for managing low-volume terms: the Dimensions tool. Dimensions let you categorize campaigns and ad groups based on intent, and a dimension can span multiple campaigns, allowing for more data aggregation.
Great categorization has three key elements, and Marin's offering provides all three. Adding descriptive metadata allows you to cluster low-volume data points into a more representative group, providing a powerful way to make better decisions about keywords at programmatic scale. In simple terms, we can still make smart decisions with less information than we'd ideally have. Here's an example of how you might categorize a long-tail, typically low-volume keyword into dimensions to provide more clarity around its value:
By grouping keywords into meaningful clusters, these customizable, client-specific dimensions give us more data to judge. The keyword [garden pruners with 1 inch cutting capacity] may have spent only $10 and driven no orders over the past 30 days, making it difficult to value. However, if we look at all non-brand pruner keywords with intent for use at home, we can aggregate much more data and assign a relevant bid to this very specific long-tail keyword.
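In spirit, that aggregation looks something like the sketch below. The dimension names, values, and performance figures are illustrative, not Marin's actual schema:

```python
import pandas as pd

# Illustrative keyword-level data tagged with dimensions. This mirrors
# the idea behind Marin's Dimensions; it is not Marin's actual schema.
keywords = pd.DataFrame({
    "keyword": [
        "[garden pruners with 1 inch cutting capacity]",
        "[bypass pruners for home garden]",
        "[hand pruners small garden]",
    ],
    "brand": ["non-brand", "non-brand", "non-brand"],
    "use_intent": ["home", "home", "home"],
    "spend": [10.0, 85.0, 140.0],
    "orders": [0, 3, 5],
})

# Roll thin keyword-level data up to the dimension level.
cluster = keywords.groupby(["brand", "use_intent"])[["spend", "orders"]].sum()
cluster["cost_per_order"] = cluster["spend"] / cluster["orders"]
print(cluster)
```

The cluster-level cost per order then becomes a defensible basis for bidding on a long-tail keyword that has too little data of its own.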
The strongest SEM keyword campaigns use practices that put keywords in positions to succeed. You should frequently evaluate your keywords for their ability to drive results. From time to time, making decisions at the keyword level is warranted to maintain or achieve peak performance, and those decisions are best backed by data-driven practices.