Emerging themes in product data acquisition and onboarding

In recent posts we explored the importance of fast and efficient product data acquisition processes, and how getting them right helps your business improve time to market and customer experience. There has never been a greater imperative to make sure your online product content stands out from the crowd. It needs to be compelling, comprehensive and complete to grab customers’ attention, while clearly demonstrating the value proposition which drives purchase conversions.

So, when it comes to product data onboarding, what’s happening and, more importantly, what’s going to happen in the near to mid-term future? We’ll give you a broad overview of three emerging themes. Firstly, the issue of unifying data onboarding for retailers and distributors. Secondly, how we can adopt widely accepted data standards to ensure the quality we need. Finally, how best to leverage the data sources which reach those standards and enhance our product data.

Unification of data onboarding for the retailer or distributor

Historically, the approach to acquiring data from suppliers has been laborious, inconsistent and not especially collaborative. Given the enormous demand (and need) for product data from suppliers, a constant headache for many retailers and distributors is the backlog created simply by being unable to keep up with the speed and quantity of incoming product data.

The Answer?

A more unified approach than currently exists is needed. That means suppliers and distributors or retailers working hand-in-hand to streamline and simplify the onboarding process by leveraging technological innovations.

Retailers and distributors require a great deal of information about each product they onboard from a range of sources, primarily suppliers. These data sets may arrive in different formats, which is why it is critically important to unify all these types of information under a single acquisition platform. Product data verification and standardisation can be streamlined as long as both sides understand the requirements and functionality of the platform used. At the end of the day, it saves both sides time, eases the pressure on resources and enhances the relationship between partners. Of course, the key to success is a high degree of collaboration among all stakeholders.

Data collection has traditionally been performed by product listing teams, but the large volumes of different types of product-related information mean a far more cross-departmental, ‘holistic’ approach is needed. Critical product information can now involve elements as diverse as logistics, stores, digital, eProcurement and marketing channels. Furthermore, we are commonly talking about diverse types of data, such as pricing data, sourcing data, product data, stock data and lead time data.

Data mapping and normalisation

The inter-partner, cross-departmental collaboration described above, combined with dedicated onboarding software, significantly alleviates this ‘bottleneck’ problem and gives you comprehensive oversight of the product data flow from source to destination.

Clearly established sets of standards apply in part to effectively mapping incoming product data. So how do you map and normalise it?

The Answer?

Centralise and automate your entire incoming product data flow so that every area of the business knows the location and standardisation protocols. 

Use an automated classification system with flexible product attributes for each category.

Because product data is often in flux, dynamic product data modelling is advisable: it allows instant updating of mapping parameters and permits changes to product attributes and product types.
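To make the mapping idea concrete, here is a minimal Python sketch of category-driven attribute normalisation. The category names, attributes and schema structure are all invented for illustration; a real onboarding platform would source its schemas from a PIM or a classification standard such as ETIM.

```python
# Hypothetical per-category attribute schemas (attribute name -> expected type).
CATEGORY_SCHEMAS = {
    "power_tools": {"voltage": str, "weight_kg": float, "brand": str},
    "apparel": {"size": str, "colour": str, "brand": str},
}

def normalise_product(raw: dict, category: str) -> dict:
    """Keep only the attributes defined for the category and coerce their types."""
    schema = CATEGORY_SCHEMAS[category]
    normalised = {}
    for attr, attr_type in schema.items():
        if attr in raw:
            normalised[attr] = attr_type(raw[attr])
    return normalised
```

Attributes not defined for the category are dropped and values are coerced to the expected type, which is the essence of normalising incoming data against a flexible, per-category model; updating a schema entry is all it takes to change the mapping.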

Adopting data standards to ensure data quality

Compared with twenty years ago, there are many more external data model standards available to assure a certain level of quality. Predefined models exist for specific industry verticals like FMCG or industrial distribution, and these are provided by organisations such as GS1 and ETIM. If retailers and distributors adopt these standards, they can share data much more easily and can guarantee that it has a base level of quality assurance. Using these standards also makes it easier to scale your offer and increase the range and type of products offered.

The Answer?

In a few words, the key is to leverage pre-built connectors, templates and mapping tools for adopting and mapping data standards into a PIM or MDM platform. The creation, management and onboarding of all file types (for example, BMEcat) can easily be simplified and partly automated. The most important initial move is a deep dive into the requirements of your system infrastructure, so that the solution you choose can be integrated with all your existing tools (like PIM and ERP).
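To illustrate why this kind of file handling lends itself to automation, here is a minimal Python sketch that pulls article data out of a BMEcat-style XML fragment using only the standard library. The sample fragment is heavily simplified and the fields chosen are just examples; a real BMEcat catalogue carries far more structure than this.

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical BMEcat 1.2 fragment for illustration only.
BMECAT_SAMPLE = """<BMECAT version="1.2">
  <T_NEW_CATALOG>
    <ARTICLE>
      <SUPPLIER_AID>AB-100</SUPPLIER_AID>
      <ARTICLE_DETAILS>
        <DESCRIPTION_SHORT>Cordless drill 18V</DESCRIPTION_SHORT>
      </ARTICLE_DETAILS>
    </ARTICLE>
  </T_NEW_CATALOG>
</BMECAT>"""

def extract_articles(xml_text: str) -> list[dict]:
    """Collect supplier article IDs and short descriptions from a BMEcat file."""
    root = ET.fromstring(xml_text)
    articles = []
    for article in root.iter("ARTICLE"):
        articles.append({
            "supplier_aid": article.findtext("SUPPLIER_AID"),
            "description": article.findtext("ARTICLE_DETAILS/DESCRIPTION_SHORT"),
        })
    return articles
```

A pre-built connector does essentially this at scale, then maps the extracted fields onto the PIM or MDM data model.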

Different business users will obviously have different product data requirements. An ideal solution is a platform which lets users create templates which are uniquely tailored to fit whatever the requirements are. Thus, rather than having to manually tailor each catalog in line with a given buyer’s requirements, the platform will map your existing data automatically (from whatever source: PIM, shop system, ERP, or MDM system) and funnel it into a specific catalog for the target buyer. It’s also easy to change discrete elements individually (for example, prices) or create rules to automatically change them for you.
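The rule mechanism above can be sketched in a few lines of Python, assuming products are represented as simple dictionaries and rules as (field, function) pairs; the field names and the 10% markup are invented for illustration.

```python
def apply_rules(product: dict, rules: list) -> dict:
    """Return a copy of the product with each rule applied to its target field."""
    adjusted = dict(product)
    for field, transform in rules:
        if field in adjusted:
            adjusted[field] = transform(adjusted[field])
    return adjusted

# Hypothetical rule set for one buyer's catalog: add a 10% channel markup.
buyer_rules = [("price", lambda price: round(price * 1.10, 2))]
```

Running every product through a buyer's rule set produces a tailored catalog automatically, without touching the source data in the PIM, shop system, ERP or MDM.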

Leveraging content enrichment and data pools

Perhaps the most common sources for product data are the large content databases known as data pools. The biggest is the Global Data Synchronization Network (GDSN), which includes the GS1 Global Registry. This provides the framework for the GS1 standards used in several industries. There are also other notable data pools, such as Syndigo, Kwikee and Icecat. 

Data pools exist to help businesses find the kind of details needed to list and sell products in a format acceptable to major resellers and marketplaces. They receive and publish product content from various sources and are useful tools for channel strategy development, given that they allow companies to publish and receive product information all in one location. 

Their function is to act as ‘recipients’ for the kind of basic product information frequently required by retailers and brands – attributes such as colours and brand names, as well as key details for supply chain management such as handling codes or precise dimensional measurements. This pool of data is verified, standardised and ready for use by companies with access to it.

These data pools are particularly useful when it comes to industrial distribution – for sourcing highly technical products where specific and accurate details are crucially important but not always easily available. 

Of course, you don’t always want to rely on data from the vendor feed alone. Unless you have a long-standing, clearly defined and trusting relationship with a vendor who can consistently provide product data sets which are complete, correct and, hence, reliable, you will probably need to access information from one or more of the following sources.

Content Service Providers (CSP)

Also known as Content Catalogue Providers, CSPs help brands, retailers and distributors simplify onboarding processes by creating, enhancing and distributing product information. They can aggregate, enhance and develop product catalogs with rich media content that is compatible and compliant with portals, platforms and marketplaces. After all, the better the information, the more likely it is to motivate consumers to buy.

Moreover, some CSPs provide visual asset enrichment services, which are particularly useful for SMEs or start-ups that need help to quickly gain market recognition.

Vendor/Supplier direct

If the distributor or retailer has an integrated relationship with supplier data feeds, they may well receive product data updates on an ongoing basis. This assumes such a relationship has been built up, and it probably works best for larger distributors with a clearly established product data governance framework and not too many suppliers. Nevertheless, the capability exists to onboard product information files and integrate them with a variety of sources, including PIM, ERP or Digital Asset Management (DAM) systems.

Content Creators

These are external companies specialising in the creation, collection and enhancement of product data. They are increasingly popular in sectors like clothing and furniture retailing, using developments such as augmented or virtual reality to create data assets that stand out from the competition. Content aggregators, like iSyndicate, can also provide add-on product information by trawling various online sources for sector-specific content.

Catalog providers

These services allow users to manage most kinds of file structure. They also provide functionality for users to create files in different formats using a drag-and-drop editor. Changes can be applied in bulk, so a retailer can adapt all or part of an existing catalog to different channel requirements – for example, automatically shortening overlong texts or changing keywords to fit a required schema.
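For instance, bulk shortening of overlong texts can be as simple as truncating each description at a word boundary – a Python sketch, where the character limit stands in for an assumed channel requirement:

```python
def shorten(text: str, limit: int) -> str:
    """Truncate overlong copy at a word boundary and mark it with an ellipsis."""
    if len(text) <= limit:
        return text
    # Cut at the limit, then drop the (possibly broken) final word.
    cut = text[:limit].rsplit(" ", 1)[0]
    return cut + "…"
```

Applied in bulk across a catalog, a rule like this adapts every description to a channel's length restriction in one pass.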

The Answer?

The simple solution is to ensure your organisation has data pool connectivity. As we’ve seen, the choice and variety of data pools is sufficient for any sector, be it B2B or B2C. Users can then acquire data from many suppliers at once, and this largely automated process makes it easy to extract and integrate up-to-date information.

The main take-away regarding acquisition of supplier or vendor data is the need for a mutually supportive relationship between the two partners. Both partners can leverage the capabilities of software platforms to ingest and maintain standardised product data sets. Furthermore, if the relationship is underpinned by a common and shared understanding of the requirements for mapping incoming data, it makes the jobs of both organisations easier.