Many digital merchants treat product data quality as a one-off ‘mini-project.’ They run what they see as a thorough cleanse, get key attributes in place, tidy up their taxonomies, go live in their PIM system, and assume the hard part is over. Unfortunately, that’s rarely the case. It doesn’t take long for product data to start degrading again. New supplier files arrive in non-conforming formats, and teams under pressure to hit launch deadlines resort to manual work-arounds rather than the preferred workflows. And channel requirements never stay still: as they shift, the data has to be amended to keep up.
Below, we explain exactly why product data quality starts decaying, what the discipline of continuous improvement should look like in practice, and how to keep your product data standards high as an embedded operational discipline, rather than having to resort to constant ad-hoc clean-ups.
Product data quality degrades by default
Product data is especially vulnerable to drift because it’s constantly in flux. New SKUs are introduced, specifications get updated, products are discontinued, and new channels stipulate new fields. If the business leaves these changes unmanaged, the result is steady entropy.
The predictable reasons:
- supplier data variability, especially formats, units, and naming conventions
- inconsistency of internal data entry
- ageing product descriptions and out-of-date specifications
- new marketplace or channel requirements
- legacy product records which don’t meet current standards
This generalised reduction in quality is gradual, so although individual instances are sometimes spotted, the overall trend isn’t. The operational consequences ripple out: more reworking, more manual fixes, and less trust in the information in the product catalogue. The commercial impact? Slower onboarding, lower conversion, more returns, and rising cost-to-serve.
A one-off clean-up doesn’t solve the problem
It’s undeniable that a major remediation exercise can be very valuable, for instance, before launching a new PIM or during a re-platforming. It gives the business a cleaner starting point. Nevertheless, without structural change, it’s only a reset, not a fix.
This is where many organisations are barking up the wrong tree. They invest time, money, and effort in the tidy-up but fail to address the operating discipline needed to keep that data in a continuously clean state. After all, suppliers continue to submit irregular files, people continue enriching content outside the PIM, and any data governance framework often remains, at best, vague. In a matter of weeks, the same pre-tidy-up issues are creeping back in to put a spanner in the works.
Yes, clean-up work is important but treating it as the solution in itself ends up being costly.
Continuous improvement depends on three things
A sustainable product data quality system rests on three foundations: governance, process, and measurement.
1. Governance ensures ownership of quality
If nobody is accountable for product data quality, issues inevitably slip through the cracks. Continuous improvement needs named owners for key areas such as:
- attribute definitions
- taxonomy changes
- supplier data acceptance
- approval gates
- exception handling
This could mean appointing data stewards, category owners, or setting up a cross-functional data council. The actual structure matters less than the fact that ownership is explicit.
2. Clarity of process stops bad data at the point of entry
High-quality product data is the output of repeatable processes, not heroic efforts by harried team members. The most useful controls don’t need to be complex, just crystal clear:
- standard supplier templates
- validation rules at ingestion
- controlled vocabularies
- enrichment workflows inside the PIM
- change control for schema and channel requirements
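To make the idea of validation at ingestion concrete, here is a minimal sketch of a rule check an incoming supplier row might pass through before reaching the catalogue. The field names, mandatory attributes, and controlled vocabulary are all hypothetical, chosen purely for illustration; this is not SKULaunch’s actual logic or API.

```python
# Hypothetical ingestion validation: mandatory fields, a controlled
# vocabulary, and a simple type check, applied before data enters the PIM.

MANDATORY = {"sku", "name", "weight_kg"}          # illustrative mandatory fields
ALLOWED_COLOURS = {"black", "white", "red"}       # a controlled vocabulary

def validate_row(row: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the row passes."""
    errors = [f"missing mandatory field: {f}" for f in MANDATORY if not row.get(f)]
    colour = row.get("colour")
    if colour and colour.lower() not in ALLOWED_COLOURS:
        errors.append(f"colour '{colour}' not in controlled vocabulary")
    try:
        if row.get("weight_kg"):
            float(row["weight_kg"])                # units assumed normalised upstream
    except ValueError:
        errors.append("weight_kg is not numeric")
    return errors

row = {"sku": "AB-123", "name": "Desk lamp", "weight_kg": "1.2", "colour": "Black"}
print(validate_row(row))  # [] -> the row is accepted
```

The key design point is that rejection happens at the boundary: a row that fails never enters the catalogue, so the error list can be sent straight back to the supplier instead of becoming someone’s downstream clean-up task.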
This is where SKULaunch, our AI-powered onboarding tool, comes into play. If heterogeneous supplier data is normalised as a matter of course before it enters the catalogue, any quality problems are contained upstream rather than having to be corrected later.
3. Measurement turns quality into a managed discipline
If you don’t measure quality, you can’t improve it. That said, all most businesses really need is a practical scorecard, not an elaborate and complicated reporting project.
Try tracking a limited set of key measures over time:
- Completeness of mandatory attributes
- Consistency of approved values and formats
- Freshness of records, including the date each was last reviewed
- Channel readiness by category
- Supplier data entry conformance rates
- Customer problems linked to product data
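The first measure on that list, completeness of mandatory attributes, is simple enough to sketch. The snippet below, with hypothetical field names and an in-memory catalogue standing in for a real export, shows the kind of calculation a scorecard could be built on.

```python
# Hypothetical completeness metric: the share of mandatory attribute
# slots that are actually filled, across the whole catalogue.

MANDATORY = ["sku", "name", "description", "image_url"]  # illustrative fields

def completeness(records: list[dict]) -> float:
    """Percentage of mandatory attribute slots filled across all records."""
    total = len(records) * len(MANDATORY)
    filled = sum(1 for r in records for f in MANDATORY if r.get(f))
    return round(100 * filled / total, 1) if total else 100.0

catalogue = [
    {"sku": "A1", "name": "Lamp", "description": "LED desk lamp", "image_url": "x.jpg"},
    {"sku": "A2", "name": "Chair", "description": "", "image_url": None},
]
print(completeness(catalogue))  # 75.0 -> 6 of 8 mandatory slots are filled
```

Tracked per category and per week, even a single number like this makes the gradual drift described above visible long before it shows up as a commercial problem.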
Doing this provides greater visibility regarding where quality is slipping and where any remedial action is going to have the biggest commercial effect.
A PIM is essential, but not enough by itself
Technologically, modern PIMs have come along in leaps and bounds. Practically all on the market provide the infrastructure for continuous improvement in areas like:
- Workflows
- Versioning
- Validation
- Audit trails
- Channel-specific models
But a PIM alone won’t perform miracles in keeping quality high.
Without adequate governance, a PIM runs the risk of becoming little more than a storage layer for bad habits. Without the underlying process, it can even be a bottleneck. Without measurement, it becomes a ‘black box.’
Technology is a control enabler, but it doesn’t replace the human steps needed to establish that control in the first place.
Build in the right cadence, not a periodic rescue mission
Those organisations which maintain consistently high product data quality levels don’t wait until there’s a crisis. Rather, they build a review rhythm into their standard operating procedures.
So, a practical cadence might include the following elements:
- Weekly automated checks for missing fields, duplicate records, and validation failures
- Monthly category reviews to assess accuracy, consistency, and channel readiness
- Quarterly analysis of returns, support queries, and listing failures to identify recurring data gaps
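As an illustration of the weekly automated check, here is a minimal sketch of a job that could run on a schedule, flagging duplicate SKUs and records with missing mandatory fields. The field names and record structure are assumptions for the example, not a prescribed format.

```python
# Hypothetical weekly check: find duplicate SKUs and records that are
# missing mandatory fields, producing a small report for review.
from collections import Counter

MANDATORY = ["sku", "name"]  # illustrative mandatory fields

def weekly_check(records: list[dict]) -> dict:
    """Return duplicate SKUs and the SKUs of records missing mandatory fields."""
    sku_counts = Counter(r.get("sku") for r in records if r.get("sku"))
    duplicates = sorted(s for s, n in sku_counts.items() if n > 1)
    missing = [r.get("sku", "<no sku>") for r in records
               if any(not r.get(f) for f in MANDATORY)]
    return {"duplicate_skus": duplicates, "records_missing_fields": missing}

records = [
    {"sku": "A1", "name": "Lamp"},
    {"sku": "A1", "name": "Lamp v2"},   # duplicate SKU
    {"sku": "A2", "name": ""},          # missing name
]
print(weekly_check(records))
# {'duplicate_skus': ['A1'], 'records_missing_fields': ['A2']}
```

The point of the cadence is that a report like this lands in someone’s inbox every Monday, whether or not anyone suspects a problem.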
This works out far cheaper than periodic large-scale remediation. Regular maintenance spreads the effort and prevents operational firefighting, which is not only disruptive but also highly stressful. Think of your people!
Why this makes sense commercially
Bad product data isn’t just untidy. It degrades your commercial performance. It’s the root cause of suppressed listings, broken filters, confusing descriptions, avoidable customer frustration, and a reputation for unreliability. It also slows every downstream initiative, from expanding the range of channels to fully leveraging the power of AI-generated content.
High-quality product data, maintained over time, means:
- faster product onboarding
- fewer manual corrections
- stronger channel acceptance
- lower return rates
- better search and conversion
- more reliable AI content generation
That’s why continuous improvement, far from being an exercise in admin, is a commercial control factor.
The strategic view
Product data quality is not something you ‘get right’ once. On the contrary, it’s something you need to maintain, even nurture! Those merchants who perform well in the long term treat data quality control as an operational discipline: consistently stabilising the inputs, standardising the process, and enforcing the rules.
Next step
If your catalogue keeps slipping back after every clean-up, the issue is not effort; it is the operating model. Get in touch with us today at Start with Data about embedding the governance, onboarding controls, and improvement cadence you need to keep product data quality high for the long term. We’ll also give you the low-down on how our onboarding tool, SKULaunch, can help you control supplier data before it degrades your catalogue.