Most publishers obsess over floor prices and demand partner selection. They spend hours analyzing eCPM trends and testing new SSP integrations. Yet one of the most consistent sources of revenue loss sits quietly inside their Prebid configuration, rarely examined and almost never measured correctly: the bidder timeout.
A timeout is not just a technical setting. It is a financial decision. Every millisecond you add to your timeout window changes who bids, how much they bid, and whether your page renders ads before a user scrolls past the slot entirely. Get it wrong in either direction — too short or too long — and your yield takes a hit that no floor price adjustment will recover.
This guide walks through the full diagnostic process: how to identify whether your current timeout is causing revenue loss, how to isolate which bidders are the root cause, and how to build a testing framework that finds the optimal value for your specific inventory.
What Actually Happens When a Bidder Times Out
When Prebid.js fires an auction, it sends bid requests to all configured bidder adapters simultaneously. Each adapter calls its respective SSP endpoint and waits for a response. The global timeout — set via pbjs.setConfig({ bidderTimeout: X }) — defines the maximum number of milliseconds Prebid will wait before closing the auction and passing whatever bids have arrived to the ad server.
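As a point of reference, the global setting is a one-line call. The 1,000ms value below is purely illustrative, not a recommendation; the rest of this guide is about finding your own number.

```javascript
// Global auction timeout: Prebid closes the auction after this many
// milliseconds and passes whatever bids have arrived to the ad server.
pbjs.que.push(function () {
  pbjs.setConfig({
    bidderTimeout: 1000 // illustrative value; derive yours from testing
  });
});
```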
Any bidder that fails to respond within that window is marked as timed out. Its bid is discarded. It does not participate in the auction. From a publisher revenue perspective, this is equivalent to that SSP simply not existing for that impression.
The problem is that timeout behavior is not uniform across bidders. A slow but high-value DSP connected to a specific SSP may consistently miss your 800ms window during peak traffic hours, even though it wins auctions comfortably at 1,200ms. You never see that demand. You never know it was there. Your eCPM looks stable because the floor still gets met by faster bidders — but your ceiling is artificially lower than it should be.
The Two Revenue Leaks Timeouts Create
Timeout misconfiguration creates revenue loss in two distinct ways that often mask each other in aggregate reporting.
The first is excluded high-value bids. When premium demand — typically from brand-safe, high-CPM advertisers running via trading desks — consistently times out, your yield curve flattens. You fill impressions at the market rate set by faster, lower-value bidders. The gap between your actual eCPM and your potential eCPM widens over time without any visible signal in your standard GAM reports.
The second is latency-driven viewability loss. If your timeout is set too high — a common overcorrection — the auction itself delays ad rendering. Slots that take more than 1.5 seconds to begin rendering show measurably lower viewability rates. Lower viewability reduces CPM on subsequent impressions because DSPs use historical viewability signals to inform their bid prices. A timeout set at 3,000ms to “capture every bidder” can actually suppress eCPM on future impressions by degrading the viewability profile of your inventory.
How to Diagnose Your Current Timeout Performance
Before adjusting any settings, you need a clear picture of how your current timeout is affecting each individual bidder. This requires pulling data from two sources simultaneously: your Prebid analytics adapter and your SSP reporting dashboards.
Step 1: Enable Granular Timeout Logging in Prebid
If you are not already using a Prebid analytics adapter, this is the first thing to implement. Without it, you are flying blind. The standard Prebid.js setup supports custom analytics adapters that capture bid response times, timeout events, and win rates per bidder.
At minimum, your analytics pipeline should be capturing the following events for every auction:
- bidTimeout — fires when a specific bidder misses the timeout window
- bidResponse — fires when a bid is received, includes response time in milliseconds
- bidWon — fires when a Prebid bid wins the ad server line item
- auctionEnd — fires when the auction closes, with the full bid landscape at that moment
With this data, you can calculate a timeout rate per bidder. Divide the number of bidTimeout events by the total number of bid requests sent to that bidder over the same period. A timeout rate above 15% for any bidder is a diagnostic flag worth investigating.
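The bookkeeping behind that timeout rate can be sketched as follows. The event names (`bidRequested`, `bidTimeout`) are standard Prebid.js events; the counter structure and function names are illustrative, not part of any Prebid API.

```javascript
// Per-bidder counters: bid requests sent vs. timeouts observed.
const bidderStats = {};

function ensure(bidder) {
  if (!bidderStats[bidder]) bidderStats[bidder] = { requests: 0, timeouts: 0 };
  return bidderStats[bidder];
}

function recordRequest(bidder) {
  ensure(bidder).requests += 1;
}

// Prebid's bidTimeout event delivers an array of the bids that
// missed the window, each carrying the bidder code.
function recordTimeouts(timedOutBids) {
  timedOutBids.forEach(function (bid) {
    ensure(bid.bidder).timeouts += 1;
  });
}

// Timeout rate = timeouts / requests for the same period.
function timeoutRate(bidder) {
  const s = bidderStats[bidder];
  return s && s.requests > 0 ? s.timeouts / s.requests : 0;
}

// Wire up to Prebid when it is present on the page.
if (typeof pbjs !== 'undefined') {
  pbjs.que.push(function () {
    pbjs.onEvent('bidRequested', function (req) { recordRequest(req.bidderCode); });
    pbjs.onEvent('bidTimeout', recordTimeouts);
  });
}
```

In production you would flush these counters to your analytics endpoint on `auctionEnd` rather than keeping them in page memory.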
Step 2: Build a Bidder Response Time Distribution
An average response time per bidder is not sufficient for this analysis. A bidder that averages 600ms but has a 90th percentile response time of 1,400ms will behave very differently under a 1,000ms timeout than a bidder that averages 600ms with a 90th percentile of 750ms. You need the full distribution.
Build a histogram of bid response times for each bidder using your analytics data. Group responses into 100ms buckets. For each bucket, record how many bids were received. This tells you exactly what percentage of bids from each bidder you are capturing at your current timeout value, and what percentage you would capture at higher or lower values.
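A minimal bucketing pass over logged response times might look like this. The input shape is assumed: an array of `{ bidder, responseTimeMs }` records pulled from your analytics pipeline.

```javascript
// Group response times into 100ms buckets per bidder.
// Returns e.g. { bidderA: { '200-299': 3, '600-699': 1 }, ... }
function buildHistogram(responses, bucketSizeMs = 100) {
  const hist = {};
  responses.forEach(function (r) {
    const lo = Math.floor(r.responseTimeMs / bucketSizeMs) * bucketSizeMs;
    const key = lo + '-' + (lo + bucketSizeMs - 1);
    hist[r.bidder] = hist[r.bidder] || {};
    hist[r.bidder][key] = (hist[r.bidder][key] || 0) + 1;
  });
  return hist;
}

// Share of a bidder's responses that would arrive inside a given timeout.
function captureRate(responses, bidder, timeoutMs) {
  const all = responses.filter(r => r.bidder === bidder);
  if (all.length === 0) return 0;
  return all.filter(r => r.responseTimeMs <= timeoutMs).length / all.length;
}
```

Running `captureRate` at several candidate timeout values against the same log answers the "what would I capture at higher or lower values" question directly.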
The output of this analysis typically reveals a clear pattern: a handful of bidders have long-tail response distributions that make them structurally incompatible with fast timeout settings, while the majority of your demand responds well within 500ms. The question is whether those slow-tail bidders are generating enough incremental revenue to justify the timeout extension they need to participate reliably.
Step 3: Calculate the Revenue Impact of Each Timeout Rate
For each bidder with a timeout rate above your threshold, estimate the revenue impact of those missed bids. This requires cross-referencing your Prebid data with SSP-side reporting.
Pull the following from each SSP dashboard for the same time period covered by your Prebid data:
- Total bid responses sent to your site
- Average CPM of winning bids
- Win rate (bids won divided by bids submitted)
The discrepancy between bids the SSP reports sending and bids Prebid reports receiving is your timeout loss at the network level. Multiply the missed bid count by the SSP's win rate and its average winning CPM, divided by 1,000 since CPM is priced per thousand impressions, to get a rough estimate of the revenue you are leaving on the table. This number is almost always larger than publishers expect.
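A worked version of that estimate, with invented inputs to show the arithmetic. Multiplying by win rate assumes the missed bids would have won at roughly the same rate as bids that arrived in time, which is a simplification.

```javascript
// Rough revenue-at-risk estimate for one SSP over one reporting period.
// CPM is priced per 1,000 impressions, hence the division by 1,000.
function estimateTimeoutLoss(sspBidsSent, prebidBidsReceived, winRate, avgWinningCpm) {
  const missedBids = Math.max(0, sspBidsSent - prebidBidsReceived);
  return missedBids * winRate * (avgWinningCpm / 1000);
}

// Example: SSP reports 500,000 bids sent, Prebid logged 410,000 received,
// a 12% win rate, and a $2.40 average winning CPM.
// 90,000 missed bids x 0.12 x $0.0024 = $25.92 for the period.
const loss = estimateTimeoutLoss(500000, 410000, 0.12, 2.40);
```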
The Case for Per-Bidder Timeout Overrides
The single global timeout is a blunt instrument. Prebid.js supports per-bidder timeout overrides, and for most publishers with more than five demand partners, using them is the correct architectural choice.
The logic is straightforward. Your fastest bidders — typically large SSPs with global infrastructure — respond in under 300ms consistently. Your slowest bidders may average 900ms but carry premium demand from specific verticals. Forcing both to compete under the same timeout either penalizes the slow bidders unnecessarily or inflates your auction duration for everyone.
Per-bidder overrides allow you to set a tight global timeout, often 600ms to 800ms, while extending the window for specific partners whose response time distribution and revenue contribution justify the additional latency. In Prebid.js, bidder-scoped configuration is applied through pbjs.setBidderConfig; whether a bidder-scoped bidderTimeout actually takes precedence over the global setting varies by Prebid version, so verify the behavior in your build before relying on it.
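A sketch of the two-tier setup, assuming your Prebid build honors a bidder-scoped bidderTimeout applied through pbjs.setBidderConfig. The bidder code and both timeout values are illustrative; test this against your own Prebid version before deploying.

```javascript
pbjs.que.push(function () {
  // Tight global window for the fast majority of bidders.
  pbjs.setConfig({ bidderTimeout: 700 });

  // Extended window for a slow-but-valuable partner, scoped via
  // setBidderConfig. Confirm your Prebid build applies a bidder-scoped
  // bidderTimeout -- core behavior varies by version.
  pbjs.setBidderConfig({
    bidders: ['slowPremiumBidder'], // hypothetical bidder code
    config: { bidderTimeout: 1200 }
  });
});
```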
This approach requires the response time distribution data described in the previous section. Without it, per-bidder overrides are guesswork. With it, they become a precise yield optimization lever that most publishers have not yet activated.
Testing Framework: How to Find the Right Number
There is no universally correct timeout value. The optimal setting depends on your page load architecture, your geographic traffic distribution, your device mix, and the specific demand partners in your stack. What works for a US-focused desktop publisher at 800ms may underperform for a mobile-first publisher with significant Southeast Asian traffic, where network latency alone can add 200ms to 400ms to every bid round trip.
The correct approach is incremental A/B testing against a defined revenue metric. Structure the test as follows.
Define Your Control and Variant
Your control is your current timeout setting. Your variant is an adjusted value, typically a 200ms to 400ms increase if your timeout rate analysis suggests you are cutting off significant demand. Run a 50/50 traffic split using your A/B testing platform or a lightweight client-side experiment flag. Avoid running tests shorter than seven days, as weekend versus weekday demand patterns will skew results in shorter windows.
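A lightweight client-side flag can be as simple as a deterministic hash of a stable session identifier, so the same user stays in the same arm across pageviews. The hash and bucketing here are illustrative glue code, not a Prebid feature.

```javascript
// Deterministically assign a session to 'control' or 'variant'
// based on a stable identifier (e.g. a first-party session cookie value).
function timeoutVariant(sessionId) {
  let hash = 0;
  for (let i = 0; i < sessionId.length; i++) {
    hash = (hash * 31 + sessionId.charCodeAt(i)) >>> 0; // unsigned 32-bit
  }
  return hash % 2 === 0 ? 'control' : 'variant';
}

// Illustrative arm values; log the assigned variant alongside every
// auction event so revenue can be segmented by arm in analysis.
const TIMEOUTS = { control: 800, variant: 1100 };
// pbjs.setConfig({ bidderTimeout: TIMEOUTS[timeoutVariant(sessionId)] });
```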
Measure the Right Metrics
The primary metric for a timeout test is revenue per session, not eCPM in isolation. A higher timeout may increase eCPM by including more premium bids while simultaneously reducing the number of viewable impressions per session due to rendering delays. Revenue per session captures both effects in a single number.
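A small worked comparison shows why the composite metric matters. All numbers are invented for illustration.

```javascript
// Revenue per session = (impressions x eCPM / 1,000) / sessions.
function revenuePerSession(impressions, ecpm, sessions) {
  return (impressions * ecpm) / 1000 / sessions;
}

// Control: faster auction, more rendered impressions, lower eCPM.
const control = revenuePerSession(30000, 2.00, 10000); // $0.00600 per session
// Variant: eCPM up 15% from slower premium bids, but fewer rendered
// impressions per session due to auction latency.
const variant = revenuePerSession(25000, 2.30, 10000); // $0.00575 per session
// eCPM rose, yet revenue per session fell -- the signal eCPM alone hides.
```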
Secondary metrics to track across both variants include average bid response time per bidder, timeout rate per bidder, viewability rate per ad unit, and page-level Core Web Vitals — specifically Largest Contentful Paint and Cumulative Layout Shift, both of which are affected by ad rendering timing.
Interpreting the Results
A test result is actionable when you see a statistically significant movement in revenue per session with no material degradation in viewability or Core Web Vitals. If revenue per session improves but viewability drops by more than three percentage points, the net result may be negative over a longer time horizon due to CPM suppression from DSPs re-scoring your inventory.
If the test shows no revenue improvement at a higher timeout, the implication is not that the current timeout is optimal. It may mean that your slow bidders are not generating incremental revenue even when they do respond in time — a separate diagnostic problem related to bid density and demand partner quality, not timeout configuration.
Common Mistakes to Avoid
Setting the timeout to match the slowest bidder’s average response time. Averages mask distribution tails. A bidder with an average response time of 900ms likely has a 95th percentile well above 1,500ms. Setting your timeout to 900ms to “include” this bidder will still cut off roughly half of its responses, and more than half under high-latency conditions when the distribution skews further right.
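The gap between mean and tail is easy to quantify from the same response-time log. The percentile here uses the simple nearest-rank definition; sample values are invented.

```javascript
// Nearest-rank percentile for p in (0, 100]; samples need not be sorted.
function percentile(samples, p) {
  const sorted = samples.slice().sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

function mean(samples) {
  return samples.reduce((s, x) => s + x, 0) / samples.length;
}

// Ten logged response times (ms) for one bidder: the mean sits at 980ms
// while the 95th percentile is 2,100ms -- a timeout set near the mean
// would cut off a large share of this bidder's responses.
const times = [400, 500, 600, 700, 800, 900, 1000, 1200, 1600, 2100];
```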
Treating timeout as a set-and-forget configuration. Bidder response times are not static. They vary with infrastructure changes at the SSP level, geographic routing updates, and changes in auction competition. A timeout value that was optimal six months ago may no longer be appropriate. Build a quarterly review of timeout rate data into your AdOps workflow.
Conflating timeout rate with bid rate. A bidder with a low timeout rate is not necessarily generating revenue. It may respond quickly with very low or zero bids. Conversely, a bidder with a high timeout rate may be attempting to bid at premium CPMs but failing to complete the round trip in time. These require different interventions.
Ignoring mobile-specific timeout profiles. Mobile network latency on 4G and LTE connections is measurably higher than desktop broadband, and it is far more variable. If your traffic is predominantly mobile, running the same timeout value across device types is leaving yield on the table. A device-conditional setup in your page-side Prebid code lets you serve a different timeout configuration to mobile users without affecting desktop auction performance.
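One page-side implementation is to branch on device class before calling setConfig. The user-agent check and both timeout values here are simplistic illustrations; Prebid does not select the timeout for you, and a production setup should use your existing device-detection layer.

```javascript
// Choose a timeout profile by device class before the auction starts.
// The regex is a crude heuristic for illustration only.
function selectTimeout(userAgent) {
  const isMobile = /Mobi|Android|iPhone|iPad/i.test(userAgent);
  return isMobile ? 1100 : 800; // illustrative values per device class
}

if (typeof pbjs !== 'undefined') {
  pbjs.que.push(function () {
    pbjs.setConfig({ bidderTimeout: selectTimeout(navigator.userAgent) });
  });
}
```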
Final Takeaway
Bidder timeout is one of the few levers in your programmatic stack that simultaneously affects demand coverage, auction latency, viewability, and long-term CPM scoring. Most publishers have never run a structured test on their timeout value. They inherited a number from their initial Prebid setup and left it untouched through every other optimization cycle.
The diagnostic process outlined here — building per-bidder response time distributions, calculating revenue impact from timeout loss, and running controlled experiments against revenue per session — is the same methodology used by yield teams at large-scale publishers managing hundreds of millions of monthly impressions. It is fully applicable to independent publishers running standard Prebid.js configurations.
The tools are available in open-source Prebid. The data is already being generated by every auction your site runs. The only missing piece is the decision to measure it.

Miles Wentworth is a programmatic advertising specialist with over a decade of hands-on experience in publisher-side AdTech. He has worked across yield management, ad server architecture, and AdOps consulting for digital media properties handling millions of monthly impressions. At PubTech, Miles leads editorial direction with a focus on technical depth, auction mechanics, and actionable revenue strategies for independent publishers.

