You're reviewing campaign performance across Meta and Google Ads, and something doesn't add up. Meta shows 150 conversions from your retargeting campaign, but Google Analytics only credits 98 to the same audience. Your boss asks which number to trust for next month's budget planning, and you're not entirely sure.
The culprit? Attribution window settings.
These seemingly technical configurations determine which ads get credit for conversions and how much budget flows to each channel. Yet many marketers never adjust the defaults, leading to skewed data, misallocated budgets, and missed opportunities. Understanding attribution windows isn't just about fixing reporting discrepancies. It's about building a measurement framework that reflects how your customers actually buy, so you can scale the campaigns that truly drive revenue.
In this guide, we'll break down exactly how attribution windows work, why platforms set different defaults, and how to configure the right lookback period for your specific business. By the end, you'll know how to align your attribution settings with your sales cycle and make confident decisions about where to invest your ad spend.
An attribution window is the time frame between when someone interacts with your ad and when they convert, during which that ad can receive credit for the conversion. Think of it as the memory span of your tracking system. If someone clicks your ad on Monday and purchases on Wednesday, that conversion only gets attributed to your ad if your attribution window is at least two days long.
Here's where it gets more nuanced. There are actually two types of attribution windows working simultaneously: click-through windows and view-through windows.
Click-through windows measure the time between when someone clicks your ad and when they convert. If you set a 7-day click-through window, any conversion happening within seven days of an ad click gets credited to that campaign. This is the more reliable and commonly used window because the user took a direct action on your ad.
View-through windows measure the time between when someone sees your ad (without clicking) and when they convert. If you set a 1-day view-through window, conversions happening within 24 hours of someone viewing your ad can be credited to that impression. This matters especially for display and video campaigns where users might see your ad, remember your brand, and convert later through a different path.
The length of your attribution window directly shapes which touchpoints receive credit in your customer journey. A shorter window captures only the final interactions before conversion, typically favoring bottom-funnel campaigns like search ads and retargeting. A longer window captures earlier touchpoints, giving credit to awareness campaigns, cold prospecting, and educational content that started the journey.
Let's say you run a furniture store. A customer sees your display ad on January 1st, clicks your retargeting ad on January 5th, clicks your brand search ad on January 8th, and purchases on January 10th. With a 3-day click window, only the brand search campaign gets credit, because the retargeting click happened five days before purchase, outside the window. With a 7-day click window, both the retargeting campaign and the brand search campaign become eligible for credit, with the split depending on your attribution model. And the display view on January 1st only counts if you run a view-through window; at nine days before purchase, it falls outside the 1-day view windows most platforms offer.
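The eligibility logic behind this example can be sketched in a few lines. The dates and campaign names below are illustrative, matching the furniture-store journey, and the window lengths are parameters you'd set per platform:

```python
from datetime import date

# Illustrative touchpoints from the furniture-store journey
touchpoints = [
    {"campaign": "display",      "type": "view",  "date": date(2024, 1, 1)},
    {"campaign": "retargeting",  "type": "click", "date": date(2024, 1, 5)},
    {"campaign": "brand search", "type": "click", "date": date(2024, 1, 8)},
]
conversion_date = date(2024, 1, 10)

def eligible(touchpoints, conversion_date, click_days, view_days):
    """Return campaigns whose touchpoint falls inside the relevant window."""
    out = []
    for t in touchpoints:
        window = click_days if t["type"] == "click" else view_days
        age = (conversion_date - t["date"]).days
        if 0 <= age <= window:
            out.append(t["campaign"])
    return out

# With a 3-day click window, only the 2-day-old brand search click qualifies
print(eligible(touchpoints, conversion_date, click_days=3, view_days=1))
# With a 7-day click window, retargeting and brand search both qualify;
# the 9-day-old display view stays outside the 1-day view window either way
print(eligible(touchpoints, conversion_date, click_days=7, view_days=1))
```

Which eligible touchpoint ultimately receives the credit is then decided by the attribution model, not the window; the window only defines the candidate set.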
This is why attribution windows matter so much for budget allocation. They fundamentally change which campaigns appear successful in your reporting, which influences where you spend next month's budget.
If you've ever wondered why conversion numbers never match across platforms, attribution window defaults are a major reason. Each ad platform sets its own default windows based on different priorities, and these differences create reporting chaos for marketers tracking performance across multiple channels.
Meta's current default attribution window is 7-day click and 1-day view. But this wasn't always the case. Before Apple's iOS 14.5 update in April 2021, Meta used a 28-day click and 28-day view window as its default. When Apple's App Tracking Transparency framework limited tracking capabilities, Meta shortened its windows to reflect the reduced visibility into user behavior. This change meant that campaigns suddenly showed fewer conversions overnight, not because performance dropped, but because the measurement window contracted.
For campaigns running today, Meta allows you to choose between 1-day click, 7-day click, and 1-day view windows in various combinations. The platform no longer offers 28-day windows for new campaigns, though some older campaigns may still use legacy settings. This shift reflects the reality of privacy-first tracking where longer windows become less reliable. Understanding the Facebook Ads attribution window limitations helps you work within these constraints effectively.
Google Ads takes a different approach entirely. Search campaigns typically default to a 30-day click window with no view-through attribution. Display and video campaigns often use a 30-day click window and a 1-day view window. Shopping campaigns default to 30-day click windows as well. Google's longer default windows reflect the platform's strength in capturing intent-based searches where the customer journey from awareness to purchase often spans weeks.
Why do platforms choose such different defaults? It comes down to where they sit in the customer journey and how they want their performance to look. Google dominates bottom-funnel search intent, where longer windows capture more conversions without inflating numbers artificially. Meta excels at top and mid-funnel awareness, but shorter windows help the platform avoid over-attribution in a privacy-limited environment where tracking becomes less accurate over time.
These default differences create a critical problem when comparing performance across platforms. If Meta uses a 7-day window and Google uses a 30-day window, Google will naturally show more conversions because it has a longer memory. This doesn't mean Google performs better. It means the measurement frameworks aren't aligned, making cross-platform comparisons meaningless without normalization.
The same conversion might be counted differently across platforms. A customer who clicks a Meta ad on day 1, clicks a Google ad on day 10, and converts on day 15 would be credited to Google (within its 30-day window) but not to Meta (outside its 7-day window). Both platforms influenced the journey, but only one receives credit in its native reporting.
The right attribution window isn't a universal number. It depends entirely on how long your customers take to make buying decisions. Choosing a window that matches your actual sales cycle ensures you're crediting the touchpoints that genuinely influence conversions without inflating numbers with coincidental views.
Short windows work for impulse purchases and low-consideration products. If you sell consumable goods, fashion accessories, or digital products under $50, customers often decide within minutes or hours of seeing an ad. A 1-day to 7-day click window captures the direct-response behavior that drives these purchases. E-commerce brands selling trending products or running flash sales typically find that 90% of conversions happen within three days of ad interaction, making longer windows unnecessary and potentially misleading.
Medium windows suit considered purchases with moderate research phases. Products like electronics, home goods, or services requiring comparison shopping often see conversion windows between 7 and 14 days. Customers might see your ad, browse your site, read reviews, compare competitors, and return to purchase within two weeks. A 7-day to 14-day window captures this deliberation period without extending so far that you're crediting ads the customer forgot about.
Extended windows match B2B sales cycles and high-ticket purchases. If you sell enterprise software, consulting services, or products over $5,000, your sales cycle likely spans weeks or months. A 28-day window (or longer, if your platform allows) reflects the reality that prospects need multiple touchpoints, demos, proposals, and internal approvals before converting. B2B marketers often find that the first touchpoint happens 30 to 90 days before a deal closes, though most platforms cap attribution windows at 28 or 30 days.
How do you determine your actual sales cycle length? Start with your CRM or analytics data. Pull a report showing the time between first website visit (or first ad click) and conversion for your last 100 customers. Calculate the median time to conversion, not the average, since outliers can skew the number. If your median customer converts within 5 days, a 7-day window captures most legitimate conversions. If it's 18 days, you need at least a 21-day window to avoid undercounting campaign impact. For a deeper dive, explore attribution window best practices for paid ads.
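The median calculation itself is trivial once you've exported the timestamp pairs. As a sketch, assuming your CRM export gives you (first touch, conversion) date pairs, the illustrative data below shows why the median beats the average here:

```python
from datetime import date
from statistics import median

# Illustrative (first_touch, conversion) date pairs exported from a CRM
journeys = [
    (date(2024, 3, 1), date(2024, 3, 4)),
    (date(2024, 3, 2), date(2024, 3, 9)),
    (date(2024, 3, 3), date(2024, 3, 8)),
    (date(2024, 3, 5), date(2024, 4, 20)),  # one slow outlier
    (date(2024, 3, 6), date(2024, 3, 10)),
]

days_to_convert = [(conv - first).days for first, conv in journeys]

# The median ignores the outlier; the mean is dragged far above
# the typical customer's behavior
print(f"median: {median(days_to_convert)} days")
print(f"mean:   {sum(days_to_convert) / len(days_to_convert):.1f} days")
```

In this toy data the median is 5 days while the mean is 13, and a window sized to the mean would be nearly three times longer than most customers need.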
Consider different windows for different campaign objectives. Retargeting campaigns targeting cart abandoners might perform best with a 1-day window since these users are already primed to convert quickly. Cold prospecting campaigns introducing your brand for the first time need longer windows because they start the journey rather than close it. Using the same window for both campaign types misrepresents their true value.
Your product mix also matters. If you sell both impulse items and considered purchases, segment your analysis by product category. You might discover that accessories convert within 3 days while furniture purchases take 21 days. This insight allows you to evaluate campaign performance more accurately by product type rather than using a one-size-fits-all window.
Even experienced marketers make attribution window mistakes that quietly corrupt their data and lead to poor budget decisions. These errors are particularly dangerous because they're invisible in your dashboards. The numbers look fine until you realize you've been optimizing for the wrong signals.
Using identical windows across platforms when comparing performance creates false equivalencies. Many marketers set all platforms to 7-day click windows thinking this creates fair comparisons. But platforms track differently, have different conversion pixel implementations, and serve different roles in the customer journey. Forcing identical windows doesn't create accurate comparisons. It just masks the underlying differences in how platforms contribute to conversions. You end up comparing apples to oranges while thinking you've normalized the data.
The better approach? Understand that cross-platform comparisons require more than matching windows. You need unified tracking that follows the same user across platforms, not just similar measurement periods. Otherwise, you're still counting the same conversion multiple times across platforms or missing conversions that involved multiple touchpoints. Learning how to fix attribution discrepancies in data is essential for accurate reporting.
Setting windows too short systematically undervalues awareness campaigns and top-funnel efforts. If you run brand awareness campaigns with a 1-day attribution window, you'll see almost no conversions because awareness doesn't drive immediate purchases. The campaign might be highly effective at starting customer journeys, but your measurement framework can't see it. You conclude the campaign failed and cut budget, eliminating the very touchpoint that feeds your retargeting audiences and bottom-funnel campaigns. This too-short attribution window problem plagues many marketing teams.
This mistake is especially common when marketers copy settings from direct-response campaigns to awareness campaigns without adjusting for different objectives. A retargeting campaign and a cold prospecting campaign serve completely different purposes and need different measurement approaches. Using the same short window for both makes awareness look ineffective by design.
Ignoring view-through attribution entirely misses the impact of display and video campaigns. Some marketers disable view-through windows completely, arguing that impressions without clicks shouldn't receive credit. While this reduces potential over-attribution, it also eliminates visibility into how visual campaigns influence behavior. Video ads, display campaigns, and even some social ads drive conversions through brand recall rather than immediate clicks. A user might see your video ad, not click, but search your brand name two hours later and convert. With no view-through window, that video ad gets zero credit despite directly influencing the conversion.
The key is using conservative view-through windows (typically 1 day) rather than eliminating them entirely. This captures genuine influence while minimizing coincidental attribution where someone happened to see your ad but would have converted anyway.
Never reviewing or updating attribution settings after initial campaign setup lets your measurement drift out of date. Your sales cycle changes as your business evolves. A startup selling to early adopters might have a 3-day sales cycle that extends to 14 days as you move upmarket to larger customers. Seasonal factors affect timing too. Holiday shopping compresses decision-making, while summer months might see longer consideration periods. Attribution windows set in January might be completely wrong by June, but most marketers never revisit them.
Build a quarterly review into your process. Check your actual time-to-conversion data and adjust windows if your sales cycle has shifted. This simple habit prevents your measurement framework from drifting out of sync with customer behavior.
Configuring the right attribution windows isn't guesswork. It's a systematic process of analyzing your data, testing different configurations, and documenting what works for your specific business. Here's how to approach it methodically.
Start by analyzing your actual time-to-conversion data. Pull conversion data from your CRM, analytics platform, or e-commerce system for the past 90 days. You need two data points for each conversion: the timestamp of the first known interaction (ad click, website visit, or form submission) and the timestamp of the conversion. Calculate the number of days between these events for each customer.
Create a distribution chart showing what percentage of conversions happen within 1 day, 3 days, 7 days, 14 days, 21 days, and 28 days of first interaction. This reveals your natural conversion curve. If 85% of conversions happen within 7 days, a 7-day window captures most legitimate attribution. If conversions are evenly distributed across 30 days, you need a longer window to avoid undercounting. Conducting thorough attribution window analysis reveals these patterns clearly.
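A quick way to build that distribution is to count the share of conversions captured by each candidate window. The days-to-conversion values below are illustrative; in practice they come from the timestamp pairs you pulled in the previous step:

```python
# Illustrative days-to-conversion values for 20 recent customers
days_to_convert = [0, 1, 1, 2, 2, 3, 3, 4, 5, 5,
                   6, 6, 8, 9, 12, 13, 15, 19, 22, 27]

# For each candidate window, what share of conversions would it capture?
for window in (1, 3, 7, 14, 21, 28):
    captured = sum(d <= window for d in days_to_convert)
    pct = 100 * captured / len(days_to_convert)
    print(f"within {window:>2} days: {pct:5.1f}%")
```

With this toy data a 7-day window captures only 60% of conversions while a 14-day window captures 80%, which would argue for the longer setting despite the platform's shorter default.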
Look for different patterns by traffic source and campaign type. Branded search conversions might happen within hours, while cold email campaigns might take weeks to convert. Segment your analysis to understand these differences rather than averaging everything together.
Test different window configurations and observe how attribution shifts between channels. Choose two different window settings (like 7-day versus 14-day click windows) and run them in parallel for 30 days. Many platforms allow you to view historical data with different attribution windows applied retroactively, making this testing easier.
Compare how conversions are distributed across campaigns under each window setting. Does extending the window from 7 to 14 days significantly increase conversions for your prospecting campaigns while barely changing retargeting numbers? That suggests your actual sales cycle is longer than 7 days, and the shorter window was systematically undercrediting top-funnel efforts.
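That comparison can be sketched as a simple per-campaign tally under each candidate window. The campaign labels and day counts below are hypothetical, standing in for the last-click data your platform reports:

```python
from collections import Counter

# Illustrative conversions: (campaign of last click, days from click to conversion)
conversions = [
    ("retargeting", 1), ("retargeting", 2), ("retargeting", 5),
    ("prospecting", 4), ("prospecting", 9), ("prospecting", 12),
    ("prospecting", 13),
]

def attributed(conversions, window_days):
    """Count conversions per campaign that fall inside the click window."""
    return Counter(c for c, days in conversions if days <= window_days)

print(attributed(conversions, 7))   # prospecting looks weak
print(attributed(conversions, 14))  # prospecting conversions surface
```

Here retargeting reports 3 conversions under either window, but prospecting jumps from 1 to 4 when the window extends from 7 to 14 days, exactly the signature of a top-funnel channel being undercounted by a short window.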
Pay attention to which window length creates the most stable and predictable results. If switching from 7 to 14 days causes wild swings in reported performance, your tracking might be inconsistent or your sales cycle might be genuinely shorter. The right window should reveal clearer patterns, not create more noise.
Document your settings across platforms and establish a consistent measurement framework. Create a simple spreadsheet listing every ad platform you use, the attribution windows configured for each campaign type, and the reasoning behind those choices. Include the date you last reviewed these settings and when you plan to review them again.
This documentation serves two purposes. First, it ensures consistency when launching new campaigns. Instead of accepting platform defaults or guessing, you have a documented standard based on your actual sales cycle. Second, it creates accountability. When performance changes, you can check whether attribution settings changed inadvertently or remained consistent.
Establish rules for different campaign types within your documentation. For example: "All retargeting campaigns use 7-day click / 1-day view windows. All cold prospecting campaigns use 14-day click / 1-day view windows. All brand awareness campaigns use 14-day click / 1-day view windows." These rules create consistency while acknowledging that different campaign objectives need different measurement approaches. For more guidance, review attribution window optimization strategies.
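One lightweight way to encode those rules, beyond the spreadsheet, is a small lookup your team consults before launching anything. The campaign types and values below are the illustrative ones from the example rules, not recommendations:

```python
# Illustrative documented standards: windows in days per campaign type
WINDOW_RULES = {
    "retargeting":      {"click_days": 7,  "view_days": 1},
    "cold_prospecting": {"click_days": 14, "view_days": 1},
    "brand_awareness":  {"click_days": 14, "view_days": 1},
}

def windows_for(campaign_type):
    """Look up the documented window; fail loudly for undocumented types."""
    try:
        return WINDOW_RULES[campaign_type]
    except KeyError:
        raise ValueError(
            f"No documented attribution window for {campaign_type!r}; "
            "add it to WINDOW_RULES before launch"
        )

print(windows_for("retargeting"))
```

Failing loudly on an unknown campaign type forces the conversation ("what window should this use, and why?") to happen at launch rather than after a quarter of inconsistent data.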
Build bridges between platform-level attribution and unified tracking. Platform-native attribution windows will always have limitations because they only see their own touchpoints. A customer might click your Meta ad, then your Google ad, then convert. Each platform claims 100% credit using its own attribution window, leading to double-counting.
This is where comprehensive attribution platforms become essential. Tools that track the complete customer journey across all touchpoints can show you which attribution window settings most closely match reality. They provide the ground truth you need to configure platform-level settings intelligently rather than blindly trusting defaults. Understanding multi-touch attribution models helps you see how different touchpoints contribute to conversions.
Attribution windows are strategic choices, not technical details to set once and forget. The right configuration reflects how your customers actually buy, ensures fair credit distribution across your marketing mix, and creates the foundation for confident budget decisions.
Remember these core principles: Match your window length to your sales cycle, not to platform defaults. Use different windows for different campaign objectives because awareness campaigns and retargeting campaigns operate on different timelines. Review your settings quarterly as your business evolves and your customer journey changes. Document your choices so your entire team measures performance consistently.
When your attribution windows align with reality, your data tells a clearer story. You can see which campaigns truly drive revenue, which channels deserve more budget, and where to cut spending without fear of eliminating hidden value. You move from guessing about campaign performance to knowing with confidence which levers to pull for growth.
The challenge is that platform-level attribution windows, no matter how well configured, still only show part of the picture. Each platform tracks its own touchpoints in isolation, leading to fragmented data and overlapping credit. True attribution confidence comes from seeing the complete customer journey across every channel, ad, and interaction.
Ready to elevate your marketing game with precision and confidence? Discover how Cometly's AI-driven recommendations can transform your ad strategy. From capturing every touchpoint to feeding better data back to your ad platforms, Cometly provides the complete attribution picture you need to scale campaigns with certainty. Get your free demo today and start making budget decisions based on the full truth of what's driving your conversions.