
Sitation Webinars

The Syndication Shuffle Webinar Series: Episode 1

When: June 25, 2025 at 1:00 pm Eastern Time

Duration: 30 minutes


Georgette Suggs

Director, Data & Content Services


Jessica Bagby

Senior Director, Managed Services


Watch the recording now!

Episode 1: Foundations for Growth

Designed specifically for PXM leaders and owners, this session shares insights we’ve gained from running more than 300 PXM implementations and integrations.

  • Develop a balanced, scalable digital shelf strategy.
  • Explore PXM solution configuration and integration.
  • Apply practical insights to avoid common PXM pitfalls.
  • Exchange ideas on running successful syndication programs.

As we explore long-term goals for navigating syndication success, Episode 1: Foundations for Growth will highlight the essential tools and insights needed to build a scalable strategy rather than simply maintain the status quo.

Transcript

Speakers: Jessica Bagby (Senior Director, Managed Services, Sitation) and Georgette Suggs (Director, Data & Content Services, Sitation)


Jessica Bagby:
Welcome, everyone, to the first episode of our series, The Syndication Shuffle. In this series, we’ll explore common syndication challenges and where to start when building a program that delivers results.

I’m Jessica Bagby, and I lead the Managed Services team here at Sitation. I’ve worked in this space for about 10 years across PIM management, syndication, DTC programs, and digital merchandising.

Georgette Suggs:
I’m Georgette Suggs, Director of Data and Content Services at Sitation. My team implements PIM solutions and optimizes them for clients. I’ve been in the industry for around 18–19 years. I started in manufacturing, where I owned product item data end to end—from development systems through Syndigo, Salsify, 1WorldSync, client portals, and retailer trading portals. I genuinely love product data.

Jessica:
We call this series The Syndication Shuffle because it reflects how people, process, and technology work together to deliver content to trading partners, retailers, DTC endpoints, and everyone who needs brand and manufacturer content. The ecosystem is always evolving.

Every brand, manufacturer, and retailer wants best-in-class digital shelf operations. That’s a tall order with many moving parts. In this series, we’ll share what we see with clients and at industry events, along with our practical experience.

Today we’re starting at the beginning: building the foundation.

Jessica:
Georgette, what is the foundation of successful syndication? Where do we start?

Georgette:
With a strong data model. That’s fundamental, regardless of platform.

Jessica:
Let’s talk about the parts of a data model.

Georgette:
A strong data model has three core components:

  1. Core Data
    Weights, dimensions, and technical specifications. Many consider this “master data.” It changes infrequently and should be stored in a system that delivers clean, structured values downstream.
  2. Content
    In practice, we often say “content” to mean everything about a product. Here I’m using it to mean the marketing side: titles, bullets, long descriptions, feature statements, and similar copy. The line between content and core data can blur. For example, a “claim” may rely on both.
  3. Assets
    It’s more than a single hero image. Asset needs vary by trading partner. Some only require a hero, while others need GS1-standard image sets (front, back, left, right, top, bottom). Many want a mix.

The most important point: Core data, content, and assets must agree. If an image shows 16 oz, the core data field should say 16 oz, and any feature bullet that references net weight should also say 16 oz. Inconsistency erodes consumer trust.

Is consistent but wrong better than inconsistent? Only temporarily. If everything says 16 oz but it should be 13 oz, you must correct all three together. Consistency is step one. Accuracy is step two, and it should follow as soon as possible. If you publish inconsistent information, you will work twice as hard to fix it later.
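The consistency rule Georgette describes can be automated. Below is a minimal sketch of such a check; the field names (`net_weight`, `bullets`, `image_label`) are illustrative assumptions, not part of any specific PIM schema.

```python
# Hypothetical sketch: flag net-weight mismatches across the three buckets.
# Field names are invented for illustration, not from a real PIM schema.
import re

def net_weight_mismatches(core_data: dict, content: dict, assets: dict) -> list:
    """Return descriptions of buckets whose stated weight disagrees with core data."""
    expected = core_data["net_weight"]  # e.g. "16 oz"
    mismatches = []
    # Content: any bullet that mentions a weight should match core data.
    for bullet in content.get("bullets", []):
        found = re.search(r"\d+(\.\d+)?\s*oz", bullet)
        if found and found.group(0) != expected:
            mismatches.append("content: " + repr(bullet))
    # Assets: compare the weight shown on the hero image, if captured as metadata.
    label = assets.get("image_label")
    if label and label != expected:
        mismatches.append("assets: image shows " + repr(label))
    return mismatches

product = {
    "core": {"net_weight": "16 oz"},
    "content": {"bullets": ["Resealable bag", "Net weight 13 oz per pouch"]},
    "assets": {"image_label": "16 oz"},
}
print(net_weight_mismatches(product["core"], product["content"], product["assets"]))
```

In practice a check like this would run before syndication, so a label change caught in core data blocks publication until content and assets agree.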

Jessica:
We’ve simplified it into three buckets—core data, content, and assets—but each bucket holds many attributes across many systems and teams. What data points do clients struggle with most, and how do we approach them?

Georgette:
A common pain point is regulated and legal content, especially in food and beverage, but also in medical and other categories. Claims and regulatory answers are challenging because packaging, formulations, and labels change. When formulas change, nutrition facts may change, and then claims may change. In the United States, 21 CFR governs many of these rules.

Regulatory and legal teams can be hesitant to provide definitive answers because accuracy is time-bound to a specific label and batch. Still, they need to be involved. Relying on sales or frontline teams to provide claims often leads to inconsistencies because they use whatever product is in hand at the moment.

Advice: Involve Regulatory, Legal, and Quality early. They should provide or validate regulated content, claims, and facts. It is hard to get buy-in, but it prevents bigger issues downstream.

Jessica:
Great context. Moving to how we get the right components: once you know what you need, how do you source it, and what should teams watch out for?

Georgette:
First, identify the sources for each attribute and asset. Do not scrape retailer sites or Google your own product information. Use internal systems of record.

For each attribute, document:

  • The system of record and how data is extracted.
  • The data owner and how values are determined. For example, who sets weight and did they actually weigh the item?
  • Update cadence and business rules.

Create a data dictionary that lists each attribute, its definition, allowed values or formats, source system, owner, update frequency, and rules or guardrails by product type. This dictionary becomes the blueprint you can implement in your PIM or related systems.
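A data dictionary entry like the one described above can be captured in a simple structure. This sketch uses invented field choices and example values; it is one possible shape, not a prescribed schema.

```python
# Illustrative data-dictionary entry; all fields and values are assumptions.
from dataclasses import dataclass, field

@dataclass
class AttributeSpec:
    name: str
    definition: str
    source_system: str            # system of record
    owner: str                    # data owner / team
    update_cadence: str           # e.g. "on label or formulation change"
    allowed_values: list = field(default_factory=list)  # empty = free-form
    rules: list = field(default_factory=list)           # guardrails by product type

net_weight = AttributeSpec(
    name="net_weight",
    definition="Declared net weight as printed on the current label",
    source_system="ERP",
    owner="Quality",
    update_cadence="on label or formulation change",
    rules=["must match hero image and any weight-referencing bullet"],
)

# The dictionary itself is a lookup keyed by attribute name.
data_dictionary = {net_weight.name: net_weight}
print(data_dictionary["net_weight"].owner)  # -> Quality
```

Keeping the dictionary machine-readable makes it straightforward to load into a PIM as attribute configuration rather than maintaining it only as a spreadsheet.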

Jessica:
So far we’ve focused on internal drivers of the data model. What about external influences—retailer requirements? Should teams start with exports from Walmart, Target, and Amazon and work backward, or the other way around?

Georgette:
It depends on your maturity.

  • If you are early in your journey, start by pulling retailer requirements from your top channels, compare and normalize, then backfill internal sources and ownership.
  • If you already have strong internal sources and a data dictionary, map your internal model to downstream requirements, identify gaps, and add needed attributes.

Both paths work. Choose based on the maturity of your data processes.
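Either path ends with the same step: comparing your internal model against each channel's requirements. A minimal sketch of that gap analysis, with invented attribute and retailer names:

```python
# Hedged sketch of a gap analysis between an internal data model and
# retailer-required attributes; all names here are invented examples.

internal_model = {"title", "net_weight", "dimensions", "hero_image", "bullets"}

retailer_requirements = {
    "RetailerA": {"title", "net_weight", "hero_image", "nutrition_facts"},
    "RetailerB": {"title", "dimensions", "bullets", "warranty"},
}

# For each channel: attributes required downstream but missing internally.
gaps = {
    retailer: sorted(required - internal_model)
    for retailer, required in retailer_requirements.items()
}
print(gaps)  # -> {'RetailerA': ['nutrition_facts'], 'RetailerB': ['warranty']}
```

The output tells you which attributes to backfill (early-maturity path) or add to an existing model (mature path).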

Jessica:
Is the data model ever “done”?

Georgette:
No. Retailers change schemas frequently. Shopper needs evolve. Regulations change. Your data model should be living and maintained continuously.

Jessica:
Most PIM and syndication platforms are built for change, whether changes are driven by retailer updates or by brand needs. Look for platforms that alert you to schema changes and support internal change management. Treat data model maintenance as an ongoing process with regular review.

Georgette:
A question for you, Jess. If a company has a large portfolio, should it build a data model for each division?

Jessica:
Start with a core data set that applies to all products: title, dimensions, weight, and other universal attributes. Then add category-specific attribution. If you need a reference, retailer specs can guide you. For example, Wayfair has a very detailed attribution model for CPG and hard goods. Looking ahead, AI and answer engines will make specification quality even more critical as shoppers ask conversational systems for product recommendations.

Georgette:
One last question: should teams use an industry standard as a starting point?

Jessica:
Yes. GS1 is an excellent baseline for anything with a barcode. Use the GS1 Global Data Model as your starting core set, then layer your industry-specific attributes on top.

Jessica:
That flew by. Thank you to everyone who attended. We’ll continue this series to help teams build successful syndication programs.

Georgette:
If you have topics you’d like us to cover, let us know. In our next episode, we’ll discuss process—the workflows needed to manage data changes and keep everything aligned.

Jessica:
Thanks for joining us, and thank you, Georgette.

Georgette:
Thank you, Jessica. See you next time.