You're spending thousands on ads across Meta, Google, TikTok, and LinkedIn. Your dashboard shows promising ROAS numbers. But here's the uncomfortable truth: if your attribution windows are misconfigured, those numbers might be fiction. Attribution windows determine how long after someone clicks or views your ad that a conversion gets credited back to that campaign. Set them too short, and you're blind to conversions that happen after your window closes—making profitable campaigns look like failures. Set them too long, and you're claiming credit for conversions your ads never influenced, inflating performance metrics and leading you to pour budget into campaigns that don't actually drive results.
The stakes are higher than most marketers realize. When your attribution windows don't match your actual customer journey, every optimization decision gets built on faulty data. You scale campaigns that aren't working. You kill campaigns that are. You misallocate budget across channels because you're comparing apples to oranges—Meta's 7-day window versus Google's 30-day window versus your analytics platform's completely different setup.
This isn't theoretical. Your customer journey has a real timeline. Some people see your ad and convert within hours. Others take weeks of research, multiple touchpoints, and several site visits before they're ready to buy. Your attribution windows need to reflect that reality, not arbitrary platform defaults that were chosen for technical convenience rather than your business needs.
This guide walks you through the exact process of configuring attribution windows that match how your customers actually behave. You'll learn how to analyze your sales cycle data, audit your current platform settings, configure windows that capture real conversions without over-attribution, and validate that your settings reflect reality. By the end, you'll have attribution windows that give you confidence in which campaigns truly drive revenue—and the data foundation to make smarter budget decisions.
Before you touch a single platform setting, you need to understand how long your customers actually take to convert. This isn't about assumptions or industry benchmarks—it's about pulling your actual historical data and letting it tell you the truth about your sales cycle.
Start by exporting conversion data from your analytics platform or CRM that shows the time between first touchpoint and final conversion. You're looking for the distribution of conversion times: how many people convert within 24 hours, within 3 days, within a week, within two weeks, and so on. Most analytics platforms can generate this report if you filter for users who converted and look at the time from their first session to their conversion event.
Here's where it gets important: don't just look at average conversion time. Averages lie. If most customers convert in 3 days but a few take 90 days, your average might show 15 days—which doesn't represent either group accurately. Instead, look at the 80th percentile conversion time. This captures the vast majority of your conversions while avoiding the trap of setting windows so long that you're crediting conversions that had nothing to do with your ads.
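If you want to see the average-versus-percentile gap concretely, here is a minimal Python sketch. The lag values are made-up stand-ins for your exported data, and the nearest-rank percentile function is one simple way to compute it; your analytics tool may use a different interpolation method.

```python
# Hypothetical conversion lags (days from first touch to conversion),
# standing in for an export from your analytics platform or CRM.
lag_days = [1, 1, 2, 2, 3, 3, 3, 4, 5, 7, 9, 10, 12, 14, 45, 90]

def percentile(values, pct):
    """Nearest-rank percentile: the smallest value at or below which
    pct percent of the observations fall."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

mean_lag = sum(lag_days) / len(lag_days)
p80 = percentile(lag_days, 80)

# The two long-tail customers (45 and 90 days) drag the average to ~13 days,
# even though 80% of conversions land within 12 days.
print(f"average lag: {mean_lag:.1f} days, 80th percentile: {p80} days")
```

Note how the two slow converters pull the average above the 80th percentile here; a window sized to the average would be shaped by outliers rather than by the bulk of your customers.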
Segment this analysis by product type, price point, and customer segment. Your $50 impulse purchase product probably has a completely different conversion timeline than your $5,000 enterprise solution. Your returning customers likely convert faster than first-time buyers. If you're running both e-commerce and lead generation campaigns, those journeys look nothing alike. Each segment may need different attribution windows.
Document the typical touchpoint sequences you're seeing. Are customers clicking an ad, visiting your site, leaving, then coming back three days later through organic search before converting? Are they seeing display ads for a week before clicking a retargeting ad and converting immediately? Understanding these patterns helps you configure windows that credit the right touchpoints without over-attributing to every impression along the way.
Pay special attention to where drop-offs occur. If you see a cliff where almost no one converts after 14 days, that's valuable signal. It means setting a 30-day window would mostly credit conversions that happened organically, not because of your ads. Your attribution window should align with the period where your ads are actually influencing behavior.
Now that you know your actual customer timeline, it's time to see what your ad platforms are currently using—and identify the mismatches causing data chaos. Pull up the attribution settings in each platform you're running campaigns on. You're about to discover why your reporting never quite adds up.
Meta Ads defaults to a 7-day click and 1-day view attribution window. That means if someone clicks your ad and converts within 7 days, Meta credits that conversion. If they just see your ad (without clicking) and convert within 1 day, Meta also takes credit. Google Ads uses a 30-day click window by default. LinkedIn uses 30-day click and 7-day view. TikTok uses 7-day click and 1-day view. See the problem? You're comparing campaign performance across platforms that are using completely different measurement standards.
Create a spreadsheet documenting every platform's current settings. Include columns for platform name, click attribution window, view-through attribution window, and attribution model (last-click, data-driven, etc.). Then add a column for your ideal setting based on the customer journey data you mapped in Step 1.
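If you prefer to generate that baseline audit programmatically, a sketch like the following works. The rows use commonly cited platform defaults, and the "ideal_click" column is a hypothetical value you would fill in from your own Step 1 analysis.

```python
import csv
import io

# Hypothetical baseline audit of attribution settings per platform.
# Window values reflect common platform defaults; "ideal_click" comes
# from your own customer-journey analysis.
rows = [
    {"platform": "Meta", "click_window": "7d", "view_window": "1d",
     "model": "last-click", "ideal_click": "14d"},
    {"platform": "Google", "click_window": "30d", "view_window": "n/a",
     "model": "data-driven", "ideal_click": "14d"},
    {"platform": "LinkedIn", "click_window": "30d", "view_window": "7d",
     "model": "last-click", "ideal_click": "14d"},
    {"platform": "TikTok", "click_window": "7d", "view_window": "1d",
     "model": "last-click", "ideal_click": "14d"},
]

# Write the audit as CSV so it can be shared as a spreadsheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```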
Identify specific mismatches that are distorting your data. If your actual sales cycle shows that 80% of conversions happen within 14 days, but Google is using a 30-day window while Meta uses 7 days, you're dramatically under-crediting Meta campaigns and potentially over-crediting Google campaigns. This mismatch might be why you think Google is your best performer when it's actually just getting credit for conversions that happened well after the ad influence wore off.
Look for discrepancies in your reporting that stem from these window differences. If your total attributed conversions across all platforms add up to more than your actual conversions, you've got overlapping attribution windows causing double-counting. Understanding how to fix attribution discrepancies becomes critical when platforms are showing vastly different conversion counts for the same time period.
Document everything in a baseline comparison document. This becomes your reference point for the changes you're about to make and helps you explain to stakeholders why the numbers are going to shift once you implement consistent windows. When your boss asks why Meta's reported conversions suddenly dropped 20%, you can show them it's not performance—it's measurement accuracy.
Click-through attribution is the most reliable signal you have. Someone saw your ad, engaged enough to click it, and then converted. Now you need to set windows that capture these conversions without claiming credit for purchases that would have happened anyway.
Start with your customer journey data from Step 1. If your 80th percentile conversion time is 10 days, consider setting your click attribution window to 14 days. This gives you a buffer to capture conversions that take slightly longer while avoiding the trap of 30-day windows that credit conversions from users who've long forgotten they ever saw your ad. For most e-commerce and SaaS businesses, windows between 7 and 21 days strike the right balance.
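The buffer rule above can be sketched as a tiny helper. The menu of window lengths is an assumption, since each platform exposes its own fixed set of options; the idea is simply to round your 80th-percentile lag up to the next available window.

```python
# Hypothetical menu of window lengths a platform might offer, in days.
SUPPORTED_WINDOWS = [1, 7, 14, 30, 60]

def recommended_click_window(p80_days):
    """Round the 80th-percentile conversion lag up to the nearest
    supported window, giving slower converters a small buffer."""
    for window in SUPPORTED_WINDOWS:
        if window >= p80_days:
            return window
    return SUPPORTED_WINDOWS[-1]

print(recommended_click_window(10))  # a 10-day p80 fits a 14-day window
```

So a 10-day 80th-percentile lag maps to a 14-day window, and a 45-day enterprise cycle maps to 60 days, matching the guidance above.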
High-consideration purchases need longer windows. If you're selling enterprise software with a 45-day average sales cycle, a 7-day window would make every campaign look like a failure. In these cases, 30-day or even 60-day windows make sense—but only if your data supports it. Don't extend windows based on hope; extend them based on evidence that conversions are actually happening in that timeframe and were influenced by your ads.
Impulse purchases need shorter windows. If you're selling trending fashion items or limited-time offers, most decisions happen within hours or days. A 30-day window would credit your ads for conversions driven by organic social posts, influencer mentions, or seasonal demand that had nothing to do with your paid campaigns. Consider 7-day or even 3-day windows for these products.
Align windows across platforms wherever possible. If you've determined that 14 days is right for your business, configure that same window in Meta, Google, LinkedIn, and any other platform that allows customization. This creates apples-to-apples comparisons and makes cross-platform optimization decisions actually meaningful. Following multi-channel attribution best practices ensures you're comparing channels fairly when deciding whether to shift budget from one platform to another.
Document your rationale for each setting. Write down why you chose 14 days for your main e-commerce campaigns, 30 days for your B2B lead gen, and 7 days for your flash sale promotions. This documentation becomes critical when team members change, when you're training new marketers, or when someone questions why your settings differ from platform defaults. Your decisions should be defensible with data, not arbitrary.
View-through attribution is where things get controversial. This setting credits conversions to ads that users saw but didn't click—and it's the source of more inflated ROAS claims than any other attribution setting. Approach this with healthy skepticism and conservative defaults.
The fundamental question with view-through attribution is whether someone who saw your ad but didn't engage with it was actually influenced by it, or whether they would have converted anyway through other channels. Meta and other platforms want you to believe every impression drives value. The reality is murkier. Some display and video ads genuinely build awareness that contributes to later conversions. Many impressions are served to users who scroll past without processing them.
Start with a 1-day view-through window as your baseline. This is the industry standard for good reason—it captures scenarios where someone sees your ad, doesn't click immediately, but then searches for your brand or visits your site directly within 24 hours. That's plausible influence. A 7-day or 30-day view-through window, however, credits conversions to ads the user probably doesn't even remember seeing.
Consider disabling view-through attribution entirely for bottom-funnel campaigns. If you're running retargeting to people who've already visited your site or added items to cart, you want clicks—not just impressions. These users already know about you. If your ad isn't compelling enough to generate a click, it probably didn't drive the conversion. Disabling view-through for these campaigns gives you cleaner signal about what's actually working.
Test different view-through settings against incrementality data when you have it. Run holdout tests where you exclude a segment of users from seeing your ads, then compare their conversion rates to the exposed group. If the lift is minimal, your view-through conversions are mostly people who would have converted anyway. Understanding conversion window attribution helps you validate whether those impressions are capturing real influence—but verify this with data, not platform claims.
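The lift math behind a holdout test is simple enough to sketch. The numbers below are illustrative, not benchmarks: the point is that a small gap between exposed and holdout conversion rates means most attributed conversions were not incremental.

```python
def incremental_lift(exposed_conversions, exposed_users,
                     holdout_conversions, holdout_users):
    """Relative lift of the exposed group's conversion rate over the
    holdout group's. Near zero means the ads added little."""
    exposed_rate = exposed_conversions / exposed_users
    holdout_rate = holdout_conversions / holdout_users
    if holdout_rate == 0:
        return float("inf")
    return (exposed_rate - holdout_rate) / holdout_rate

# Illustrative numbers: 2.4% vs 2.2% conversion rate is roughly 9% lift,
# suggesting most view-through conversions would have happened anyway.
lift = incremental_lift(240, 10_000, 220, 10_000)
print(f"lift: {lift:.1%}")
```

A real test also needs enough volume for the difference to be statistically meaningful, so treat single-digit lift on small samples as noise rather than signal.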
Be especially cautious with programmatic display and video campaigns. These formats generate massive impression volumes, and even a 1-day view-through window can credit thousands of conversions that had nothing to do with your ads. If your view-through conversions far exceed your click-through conversions, you're probably over-attributing. Scale back the window or disable it entirely until you can validate that those impressions are actually driving incremental results.
Even with perfectly configured windows on each platform, you'll still face a fundamental problem: each platform only sees its own touchpoints. Meta doesn't know about the Google ad someone clicked yesterday. Google doesn't know about the TikTok video they watched last week. To get accurate attribution, you need a unified view that tracks the entire customer journey regardless of where touchpoints happen.
This is where centralized attribution platforms become essential. Tools like Cometly capture every touchpoint—ad clicks, site visits, email opens, CRM events—and connect them to individual customer journeys. Instead of relying on each ad platform's limited view, you get a complete picture of how channels work together. Exploring multi-touch attribution models lets you apply consistent attribution logic across all your marketing, comparing channels fairly and understanding which combinations drive the best results.
Set up unified tracking that works across all your channels. Implement UTM parameters consistently across every campaign, platform, and creative. Following UTM parameter best practices ensures you're capturing both paid and organic touchpoints so you're not just seeing the ad clicks but also the organic search visits, direct traffic, and email interactions that happen between ad exposures. This complete view is what lets you configure attribution windows that reflect reality rather than platform-specific silos.
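One way to enforce that consistency is to generate every tagged link from a single helper instead of typing UTMs by hand. This is a sketch of one possible convention (lowercase values, the same parameter set on every link), not a standard:

```python
from urllib.parse import urlencode

def tag_url(base_url, source, medium, campaign, content, term=""):
    """Build a consistently tagged URL so journeys stitch together
    across channels. Values are lowercased to avoid case-split rows
    in reporting."""
    params = {
        "utm_source": source.lower(),
        "utm_medium": medium.lower(),
        "utm_campaign": campaign.lower(),
        "utm_content": content.lower(),
    }
    if term:
        params["utm_term"] = term.lower()
    return f"{base_url}?{urlencode(params)}"

print(tag_url("https://example.com/pricing", "Meta", "paid_social",
              "Spring_Sale", "carousel_v2"))
```

Centralizing tagging like this prevents the classic reporting split where "Meta", "meta", and "facebook" show up as three different sources.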
Configure server-side tracking to maintain accuracy despite browser restrictions. With iOS privacy changes and browser cookie limitations, client-side tracking misses an increasing percentage of conversions. Server-side tracking sends conversion data directly from your server to ad platforms, bypassing browser restrictions and ensuring you're capturing the full picture. This is especially critical for maintaining accurate attribution windows—if you're missing 30% of your conversions due to tracking limitations, your windows are configured based on incomplete data.
Create standardized reporting that compares channels using the same attribution logic. Build dashboards that show performance across Meta, Google, LinkedIn, and other channels using your chosen attribution windows and models rather than each platform's defaults. Following attribution reporting best practices reveals patterns you would never spot in platform-specific dashboards viewed in isolation, and gives you a consistent basis for budget allocation decisions.
Configuration is just the beginning. Now you need to validate that your attribution windows are actually capturing reality—not over-crediting or under-crediting your campaigns. This testing phase separates marketers who trust their data from those who just hope it's right.
Run A/B tests comparing different attribution windows on identical campaigns. Duplicate a campaign, configure one with your new window settings and one with platform defaults, and run them simultaneously with equal budget. Compare the attributed conversions and ROAS against the actual revenue each campaign drove in your CRM or order system. The version that matches reality most closely is your winner.
Cross-reference attributed conversions with actual CRM data and revenue. Pull a list of all conversions your ad platforms claim credit for during a specific week. Then pull the actual customer records and revenue from your CRM or order system for that same period. Do the numbers align? Implementing best practices for tracking conversions accurately helps you spot over-attribution, such as platforms collectively reporting 500 conversions while your CRM shows only 300 new customers.
Look for signs of over-attribution. If your total attributed conversions across all platforms add up to more than your actual conversions, your windows are too long or your view-through settings are too generous. If your attributed revenue exceeds your actual revenue, same problem. These are clear signals that you're crediting the same conversions to multiple touchpoints or claiming credit for conversions your ads didn't influence.
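This sanity check is easy to automate once you have both numbers. A rough sketch, using hypothetical attributed totals and a CRM ground-truth count:

```python
def attribution_gap(platform_attributed, crm_actual):
    """Ratio of platform-attributed conversions to actual conversions.
    Well above 1.0 means platforms claim more conversions than exist
    (over-attribution); well below 1.0 suggests tracking gaps."""
    return sum(platform_attributed.values()) / crm_actual

# Hypothetical week: platforms together claim 500 conversions,
# but the CRM shows 300 new customers.
attributed = {"meta": 210, "google": 180, "tiktok": 60, "linkedin": 50}
ratio = attribution_gap(attributed, crm_actual=300)
print(f"attributed/actual ratio: {ratio:.2f}")
```

A ratio well above 1.0, as in this example, is the double-counting signal described above: shorten your windows or tighten view-through settings, then re-run the check.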
Watch for under-attribution signals too. If your ad spend is profitable based on actual revenue but your platforms show terrible ROAS, you might have windows that are too short or tracking gaps that miss conversions. If you see strong business growth during periods of heavy ad spend but can't attribute it to specific campaigns, your measurement needs adjustment—either longer windows or better tracking implementation.
Set calendar reminders to review attribution window performance quarterly. Customer behavior changes. Your product mix evolves. Seasonal patterns affect purchase timelines. What worked in Q1 might not work in Q4. Schedule regular reviews where you pull fresh customer journey data, compare it to your current window settings, and adjust as needed. Attribution windows aren't a one-time configuration—they're an ongoing optimization process.
Attribution windows are the foundation of accurate marketing measurement. Get them right, and you have confidence in which campaigns drive revenue. Get them wrong, and every optimization decision is built on sand. The good news is that you now have a systematic process for configuring windows that reflect your actual customer journey rather than arbitrary platform defaults.
Here's your quick-reference checklist to implement what you've learned. Map your actual customer journey timeline before touching any settings—pull historical data showing time from first touch to conversion, segment by product and customer type, and identify your 80th percentile conversion time. Audit current platform defaults against your sales cycle and document the mismatches causing data discrepancies. Configure click windows to capture your 80th percentile conversion time, typically between 7 and 21 days for most businesses, with longer windows only for high-consideration purchases where data supports it.
Set conservative view-through windows—1-day is standard and safe—unless you have incrementality data proving longer windows capture real influence. Consider disabling view-through entirely for bottom-funnel campaigns where clicks matter most. Implement cross-platform tracking for unified attribution that sees the complete customer journey, not just individual platform silos. Proper data integration ensures you maintain accuracy despite browser restrictions and privacy changes.
Test and validate quarterly against actual revenue data. Run A/B tests comparing different window settings, cross-reference attributed conversions with CRM records, and look for signs of over-attribution or under-attribution. Applying marketing budget allocation best practices becomes much easier once your attribution windows accurately reflect customer behavior, and quarterly reviews keep them accurate as your product mix changes.
Attribution windows aren't a set-it-and-forget-it configuration—they should evolve as your business, products, and customer behavior change. Start with these foundational settings based on your actual data, then refine based on what your testing reveals about how customers actually convert. The goal isn't perfection; it's continuous improvement toward measurement that reflects reality.
Ready to elevate your marketing game with precision and confidence? Discover how Cometly's AI-driven recommendations can transform your ad strategy—Get your free demo today and start capturing every touchpoint to maximize your conversions.