
The Challenges of Connecting Multiple Data Sources in Akeneo PIM

December 3, 2021

With every PIM implementation there is an ideal scenario: it's simple, and it follows the plan. But let's be realistic: complexity changes with the details, and our world is less than perfect. These challenges often add steps and options, but they aren't anything that can't be handled with Akeneo and our expertise.

Data Sources for a “Vanilla” PIM

Traditional PIM data flows often involve three major systems:

  • ERP
  • PIM
  • E-Commerce Site

In these ‘vanilla’ PIM implementations, data governance is relatively straightforward. Products are born in the ERP, and that data flows into the PIM, where the attributes are marked as read-only and often grouped in an ERP attribute group. Once a defined set of required PIM attributes has been enriched, the complete product flows downstream to a single destination.

The most common level of complexity that shows up for clients is multiple channels of data, which leads to the creation of localizable and scopable attributes, allowing a single attribute to hold multiple values. Added complexity in the data usually means more complex downstream exports, but that filtering is straightforward with proper PIM configuration.
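As a concrete sketch of how a localizable, scopable attribute holds multiple values, here is a minimal Python example shaped like Akeneo's product value payloads (values keyed by attribute, each entry carrying a `locale` and `scope`). The attribute name, channel codes, and data are hypothetical:

```python
# Akeneo-style product values: one attribute, several entries distinguished
# by locale and scope (channel). Sample data is hypothetical.
product_values = {
    "description": [
        {"locale": "en_US", "scope": "ecommerce", "data": "Short web copy"},
        {"locale": "en_US", "scope": "print", "data": "Long catalog copy"},
        {"locale": "es_US", "scope": "ecommerce", "data": "Texto web corto"},
    ]
}

def values_for_channel(values, scope, locale):
    """Filter a product's values down to one channel/locale pair,
    the way a downstream export would. Non-scopable or non-localizable
    entries (scope/locale of None) always pass through."""
    return {
        attr: entry["data"]
        for attr, entries in values.items()
        for entry in entries
        if entry["scope"] in (scope, None) and entry["locale"] in (locale, None)
    }

print(values_for_channel(product_values, "ecommerce", "en_US"))
# {'description': 'Short web copy'}
```

The same single attribute yields different values per export, which is exactly the filtering a channel-specific downstream feed relies on.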

Multiple sources of data flowing into the PIM can lead to headaches, lost data, and, worst of all, poor data quality and low trust in the system. Most of these pain points can be avoided by establishing clear ownership of data at the attribute-group level.

For example, let’s say you have a PIM with four sources of product data:

  1. ERP (includes Name, SKU, ManufacturerID, Manufacturer, ERP Description)
  2. Translations Service (owns the Spanish locale, es_US, for all localizable attributes in the marketing team's attribute groups)
  3. Marketing Team (owns the long description and product-specific attributes such as color, screen size, hard drive size, and memory, in the English locale only, en_US, across all product categories)
  4. DAM (product images, main image, manuals)

The four data sources (two teams and two automated imports, from a DAM and an ERP) will have zero data collisions. In other words, because each attribute's value can only be filled by one source, there is never a question about the origin of the data or who should be tasked with cleaning it.
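The "zero collisions" property can be checked mechanically: map each source to the attributes it owns and verify that no attribute is claimed twice. A minimal Python sketch, with illustrative (hypothetical) attribute codes mirroring the four sources above:

```python
# Hypothetical ownership map: each source owns a disjoint set of attributes.
OWNERSHIP = {
    "erp": {"name", "sku", "manufacturer_id", "manufacturer", "erp_description"},
    "translations": {"long_description_es"},
    "marketing": {"long_description", "color", "screen_size",
                  "hard_drive_size", "memory"},
    "dam": {"product_images", "main_image", "manuals"},
}

def find_collisions(ownership):
    """Return the attributes claimed by more than one source; an empty
    result means every value has exactly one unambiguous owner."""
    seen, collisions = set(), set()
    for attrs in ownership.values():
        collisions |= seen & attrs   # attributes already claimed elsewhere
        seen |= attrs
    return collisions

print(find_collisions(OWNERSHIP))
# set()
```

An empty result is the formal version of "never a question of the origin of the data": every attribute has exactly one owner.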

Introducing Greater Complexity to the Data Model

In an effort to improve data quality quickly and assist the marketing team, what happens if we add a subscription to pre-enriched product data? Let's add a new feed, this time coming from 1WS, for a subset of their products in the computer space, in the United States English locale (en_US) only. What would that do to the data ownership?

  1. ERP: No change
  2. Translations Service: No change*
  3. Marketing: Significant change
  • Both the Marketing Team and 1WS would be updating these values
  4. DAM: No change
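Running the same kind of ownership check after adding the 1WS feed makes the marketing-side problem visible: the collision set is exactly the overlap between the two claimants. The attribute codes below are hypothetical:

```python
# Hypothetical attribute sets: what the marketing team owns, and the subset
# the new 1WS computer-space feed would now also populate.
marketing = {"long_description", "color", "screen_size",
             "hard_drive_size", "memory"}
one_ws = {"long_description", "color", "screen_size"}

# Every attribute claimed by both sources is a potential data collision.
collisions = marketing & one_ws
print(sorted(collisions))
# ['color', 'long_description', 'screen_size']
```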

There are a couple of ways to handle the potential collisions on the marketing attributes, but it is worth pausing here to acknowledge that this scenario is already outside of a perfect world. Splitting responsibilities by category, locale, attribute group, or any of the other options in Akeneo would be a cleaner way to handle this challenge. But reality is far from perfect, so my recommendation would be to take one of the following approaches:

  1. Prefer human-enriched data to imported data
    1. This puts some extra work on the integration team pushing data from 1WS into Akeneo: before writing, the integration would check whether an attribute already had a value. If it did, the incoming value for that attribute would not be written, ensuring human-enriched values are preferred over the import.
    2. The trade-off is that if the 1WS data is more recent and/or better, it would be lost. Furthermore, because it was never written, there would be no record of it in the Akeneo history on the individual product.
  2. Prefer the latest update
    1. Whether from 1WS or human enrichment, the latest data flows out. This would treat a 1WS update like an update from any other user in the system.
    2. The trade-off is that a user could have their content work overwritten. This may not seem like a big deal, but at scale, preferring an automatic import over human enrichment can lead to distrust in the system and less clean data. AI and outside teams are fully capable of producing quality data, but I would prefer human to automated in most scenarios. That said, the Akeneo history would be available here for a user to audit, on a single-product basis, to see where the higher effort lies.
  3. Automate the initial load of values, then lock out the connector
    1. This is similar to option 1, but it would allow an initial write of product values on product creation; once the product data was written successfully, additional writes would be blocked.
    2. The trade-off is that you only get the benefit of the initial 1WS data: if 1WS published a large-scale update of a product's values after that first write, the update would not be saved. On the other hand, a user could be confident that their manual enrichment would be protected even where 1WS data was in place.
  4. Create attribute overrides
    1. Every attribute in this scenario could have a 1WS-owned attribute and a corresponding override attribute that would be human-owned only.
    2. The trade-off is a massive duplication effort, and a poorly enriched human override would beat the 1WS value every time.
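
To make option 1 concrete, here is a minimal Python sketch of the merge logic the integration team would run before pushing 1WS data into Akeneo: read the product's current values, and only keep the incoming values for attributes that are still empty. The value shape follows Akeneo's REST payloads, but the function name and sample data are hypothetical:

```python
# Sketch of option 1 ("prefer human-enriched data"), assuming the
# integration fetches the current product before writing. Sample values
# are hypothetical; in practice current_values would come from the
# Akeneo REST API and the returned patch would be sent back to it.
def merge_prefer_existing(current_values, incoming_values):
    """Keep only the incoming (1WS) values for attributes that are still
    empty, so human-enriched values are never overwritten."""
    patch = {}
    for attr, entries in incoming_values.items():
        existing = current_values.get(attr) or []
        has_value = any(e.get("data") not in (None, "", []) for e in existing)
        if not has_value:
            patch[attr] = entries
    return patch

current = {"color": [{"locale": "en_US", "scope": None, "data": "Silver"}]}
incoming = {
    "color": [{"locale": "en_US", "scope": None, "data": "Gray"}],
    "memory": [{"locale": "en_US", "scope": None, "data": "16 GB"}],
}
print(merge_prefer_existing(current, incoming))
# {'memory': [{'locale': 'en_US', 'scope': None, 'data': '16 GB'}]}
```

Option 3 is essentially the same check applied once per product lifecycle (gate on "has this product been written before?") rather than per attribute value.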

With multiple avenues available to handle possible marketing attribute collisions, this imperfect scenario can be resolved. No solution is without trade-offs, and we work to ensure that our clients have all of the information required to handle these complex scenarios.
