Unique IP Votes: What Contest Platforms Actually Detect (2026)
By Victor Williams
Unique IP votes in online contests are ballot submissions made from distinct IP addresses, typically enforced through per-IP deduplication logic. But uniqueness alone is no longer sufficient — contest platforms in 2026 classify IPs by ASN reputation, IPv4 vs IPv6 origin, datacenter vs residential/mobile assignment, and geographic clustering to determine whether each unique IP represents a believable human voter.
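The per-IP deduplication baseline mentioned above can be sketched in a few lines; every other check in this article is layered on top of something like it. The names here (`record_vote`, `seen_ips`) are illustrative, not any platform's actual API.

```python
# Minimal sketch of 2019-era per-IP deduplication: one vote per
# distinct address, nothing else. Names are illustrative.
seen_ips: set = set()

def record_vote(ip: str) -> bool:
    """Return True if the vote counts, False if this IP already voted."""
    if ip in seen_ips:
        return False
    seen_ips.add(ip)
    return True
```

A repeat submission from the same address simply returns `False`; the rest of the article describes why passing this check alone no longer means the vote is counted.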
What Do Contest Platforms Actually Check Beyond Simple IP Uniqueness?
Contest platforms in 2026 apply a multi-layer IP analysis that goes well beyond confirming an address hasn't voted before. They classify each IP by ASN (the network it belongs to), IP type (residential, mobile, datacenter, VPN, proxy, Tor), geographic coherence with the contest's expected audience, subnet clustering patterns, and behavioral session signals. A technically unique IP that fails these secondary checks is often silently discarded or held for review.
The shift from simple deduplication to layered IP intelligence happened gradually between 2019 and 2023. Early platforms stored each previously seen voter IP in a hash table and blocked repeats. IP rotation made that trivial to circumvent, so platforms began querying commercial geolocation databases. As bulk proxy services proliferated, IP intelligence vendors — MaxMind, IPinfo.io, Spur.us — built specialized classifications for proxy and hosting IPs. By 2023, the major voting widget providers had integrated these databases as standard components of their fraud stacks.
The practical implication is significant. A vote delivered from a unique IP in an AWS data center in Virginia is not a “unique IP vote” in any meaningful quality sense — it is a vote from a known hosting range that will be classified as non-residential by any platform using MaxMind GeoIP2 or IPinfo’s detection API. The IP is unique, technically, but it carries a fraud-risk classification that typically results in silent discard or flagging.
What platforms are actually trying to reconstruct from IP data is a probabilistic answer to the question: “Does this IP represent a real person, in a plausible location, using a normal internet connection?” The technical layers they examine are all different evidence streams toward that single question.
How Does ASN Reputation Work as a Primary Detection Signal?
An Autonomous System Number groups IP ranges under a single network operator — an ISP, cloud provider, or enterprise. ASN reputation is a pre-computed trust score assigned to the entire network based on its abuse history, IP type classification, and organizational category. A vote from Comcast ASN 7922 (residential broadband) carries inherently higher trust than a vote from Hetzner ASN 24940 (German datacenter) — regardless of what the individual IP's history shows.
The ASN layer is powerful precisely because it operates at scale with very low false-positive rates for the platform. They do not need to evaluate the individual IP — they evaluate the network class. This is efficient and works extremely well because the business model of different ASN operators is fundamentally different: Comcast sells residential broadband to households; Hetzner sells servers to developers. The populations using each ASN look structurally different in their traffic patterns.
IP intelligence vendors maintain ASN classification databases that update continuously. When a new hosting provider launches and begins allocating IP space, their ASN gets classified as “hosting” within weeks. The classification persists even after IPs are resold or reassigned, because the ASN itself doesn’t change. This is why buying “clean” IPs from a hosting provider doesn’t help — the ASN classification follows the IP block, not the individual address.
| ASN Category | Example Operators | Risk Level | Typical Platform Treatment | Trust Score Estimate |
|---|---|---|---|---|
| Consumer ISP (residential) | Comcast, AT&T, BT, Verizon Home, Spectrum | Low | Accepted; counted toward valid unique IPs | 85–95/100 |
| Mobile Carrier | T-Mobile, Verizon Wireless, Vodafone, Orange | Very Low | Accepted; some platforms loosen deduplication for mobile NAT | 88–96/100 |
| Business ISP (small office) | Regional ISPs, small business broadband | Low-Medium | Generally accepted; lower volume expected | 70–85/100 |
| Residential Proxy Pool | Bright Data, Oxylabs residential pools, Smartproxy | Medium-High | Flagged by Spur.us integration; held for review on aware platforms | 40–65/100 |
| Consumer VPN Exit | NordVPN, ExpressVPN, Mullvad exits | High | Flagged or rejected; VPN classification in MaxMind | 20–40/100 |
| Cloud / Datacenter | AWS, GCP, Azure, DigitalOcean, Hetzner, Linode | Very High | Rejected outright on modern platforms; IP range blocklisted | 5–20/100 |
| Tor Exit Node | Tor Project exit IPs | Maximum | Universally blocked; Tor exit lists are publicly available | 0–5/100 |
For vote delivery that uses IP-diverse residential infrastructure, the ASN composition of the delivery pool is the single most important quality indicator. Ask any provider for their ASN distribution. If they cannot answer, their IPs are likely concentrated in hosting or proxy ranges.
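The category-to-trust mapping in the table above can be sketched as a two-step lookup. The ASN-to-category map and the trust midpoints below are assumptions drawn from the table for illustration; real platforms source this data from vendors such as MaxMind or IPinfo rather than hardcoding it.

```python
# Illustrative ASN-reputation lookup mirroring the table above.
# Only the two ASNs named in the article are mapped; scores are
# rough midpoints of the table's ranges, not vendor data.
ASN_CATEGORY = {
    7922: "consumer_isp",   # Comcast (cited above)
    24940: "datacenter",    # Hetzner (cited above)
}

TRUST_BY_CATEGORY = {
    "consumer_isp": 90,
    "mobile": 92,
    "residential_proxy": 52,
    "vpn_exit": 30,
    "datacenter": 12,
    "tor_exit": 2,
}

def trust_score(asn: int) -> int:
    """Score a vote's origin by network class, not individual IP."""
    # Unknown ASNs are treated cautiously in this sketch.
    category = ASN_CATEGORY.get(asn, "datacenter")
    return TRUST_BY_CATEGORY[category]
```

The design point the sketch captures is the one from the section: the decision is made per network class, so an individually "clean" IP inherits its ASN's score.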
Does IPv6 Provide Better Detection Avoidance Than IPv4 for Contest Votes?
IPv6 does not provide inherent detection avoidance — it is evaluated on the same ASN and IP-type classification framework as IPv4. A consumer ISP IPv6 address is trusted on the same basis as a consumer ISP IPv4 address. A datacenter IPv6 address is flagged on the same basis as a datacenter IPv4 address. The protocol version is irrelevant; the organizational owner of the IP space is what matters to detection systems.
There is a persistent myth in the contest vote market that IPv6 addresses are harder to block because the address space is enormous (2^128 addresses vs 2^32 for IPv4) and full blocklisting is impractical. This is technically correct but operationally irrelevant. Platforms do not block individual IPv6 addresses — they block ASN ranges or classification categories. An IPv6 address from DigitalOcean’s ASN is just as blocked as an IPv4 address from the same ASN.
Where IPv6 does create some operational nuance is at the consumer ISP level. Many home broadband providers assign IPv6 addresses from dynamic /64 prefix pools, meaning the same household may use many different IPv6 addresses over time. This technically expands the apparent IP diversity from residential users but does not give datacenter-origin traffic any advantage.
The relevant comparison is not IPv4 vs IPv6 but residential vs non-residential, which maps cleanly onto ASN category regardless of IP version.
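The dynamic-prefix point has one practical coding consequence: deduplicating IPv6 on the full /128 address over-counts households whose addresses rotate within one prefix. A sketch, assuming a platform collapses consumer IPv6 to its /64 prefix as described above, using Python's standard `ipaddress` module:

```python
import ipaddress

def dedup_key(ip: str) -> str:
    """Collapse an address to a plausible deduplication unit:
    the /64 prefix for IPv6 (one home prefix, many /128s over time),
    the full address for IPv4. The /64 choice is an assumption from
    the dynamic consumer-prefix behavior described above."""
    addr = ipaddress.ip_address(ip)
    if addr.version == 6:
        # strict=False masks the host bits down to the /64 network
        return str(ipaddress.ip_network(f"{ip}/64", strict=False))
    return ip
```

Two rotated addresses from the same household prefix then produce one key, while IPv4 addresses pass through unchanged.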
| Detection Dimension | IPv4 Behavior | IPv6 Behavior | Platform Impact |
|---|---|---|---|
| ASN Classification | Well-catalogued by MaxMind, IPinfo | Same databases cover IPv6 ASNs | Identical — ASN type determines trust |
| Datacenter Detection | AWS/GCP/Azure ranges fully documented | Same providers have IPv6 allocations, equally documented | No IPv6 advantage for datacenter origin |
| Consumer ISP Assignment | Dynamic DHCP; typically /32 per household | Dynamic /64 prefix; potentially many /128 addresses per household | Slightly higher apparent diversity from residential IPv6 |
| Mobile Carrier | Carrier NAT; multiple users per IP | IPv6 prefix per device on modern carriers; cleaner 1:1 mapping | Mobile IPv6 slightly higher trust; NAT-sharing issue reduced |
| VPN/Proxy Classification | Commercial VPN exits well-catalogued | Most VPN providers use IPv4 exit; IPv6 is less common | Minor IPv6 gap in VPN catalog — closes as IPv6 adoption grows |
Why Does Geographic Dispersion Matter, and What Patterns Trigger Fraud Alerts?
Geographic dispersion in IP voting should mirror the organic geographic profile of a contest's actual audience — concentrated in the expected region, not randomly distributed globally. A global scatter pattern is actually a fraud signal on audience-specific contests like radio polls or local business awards. Coherent, audience-consistent geographic distribution is more valuable than maximum diversity.
This is counterintuitive for many buyers, who assume that geographic diversity is always desirable. It depends entirely on the contest. A “best restaurant in Austin, Texas” competition should have voters predominantly from Texas — specifically central Texas. A vote delivery campaign that distributes evenly across all 50 US states looks unnatural relative to what a locally focused contest would attract organically.
The coherence principle applies at multiple levels. Country-level coherence is the most obvious: a US contest should have US-origin IPs. State-level coherence matters for regional contests. City-level coherence matters for local awards. The more granular the expected audience, the more geographically targeted the delivery should be.
Clustering is the flip side of this principle. If geographic coherence is about having the right distribution, clustering is about avoiding the wrong concentration. Twenty votes from the same /24 subnet in two minutes is a clustering event that suggests a shared exit node — even if all 20 IPs are technically unique and technically within the right geography. Subnet diversity is the operational requirement that corresponds to the clustering detection surface.
In our 2025 campaign reviews across 10,000+ IP vote deliveries, the pattern that most consistently triggered platform audit flags was not geographic over-diversity but subnet clustering: groups of 5–15 votes arriving from the same /24 block within narrow time windows. Addressing this requires genuine IP pool depth — hundreds or thousands of distinct residential IPs drawn from multiple ISPs across the target region.
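The clustering pattern described above can be detected with a sliding-window count per /24. This is a sketch of the general technique, not any platform's implementation; the window and threshold values are illustrative, chosen to match the 5–15-votes-per-/24 pattern reported above.

```python
import ipaddress
from collections import defaultdict

def subnet_clusters(votes, window_s=600, threshold=5):
    """Flag /24 subnets contributing >= threshold votes inside any
    window_s-second span. `votes` is a list of
    (timestamp_seconds, ipv4_string) pairs. Defaults are illustrative."""
    by_subnet = defaultdict(list)
    for ts, ip in votes:
        net = ipaddress.ip_network(f"{ip}/24", strict=False)
        by_subnet[str(net)].append(ts)

    flagged = []
    for subnet, stamps in by_subnet.items():
        stamps.sort()
        left = 0
        for right in range(len(stamps)):
            # shrink window until it spans at most window_s seconds
            while stamps[right] - stamps[left] > window_s:
                left += 1
            if right - left + 1 >= threshold:
                flagged.append(subnet)
                break
    return flagged
```

Six votes from one /24 inside a few minutes trip the flag; the same volume spread across distinct /24s does not, which is exactly why subnet-level pool depth matters.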
What Behavioral Session Signals Do Platforms Layer on Top of IP Checks?
Beyond IP reputation, contest platforms apply session-level behavioral scoring to each vote attempt: time spent on the page before voting, navigation path to the contest entry form, mouse movement entropy, scroll behavior, and interaction velocity. A vote from a high-reputation residential IP that also has human-like session behavior scores higher than one from the same IP type with robotic-looking session signals.
The shift toward behavioral scoring reflects the inadequacy of IP-only detection after the proliferation of residential proxy services. Once residential IPs became accessible as a commodity, platforms needed additional signal layers. Behavioral analysis — modeling what human voting sessions look like vs automated sessions — became that layer starting around 2021.
Session time is the simplest behavioral signal. A human visiting a contest page typically spends 15–90 seconds reading the page, locating the voting button, and completing the action. An automated session hitting the voting endpoint directly produces a near-zero time-on-page. Platforms log timestamps from page load to form submission and flag sessions outside the human-plausible range.
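The time-on-page check reduces to comparing the load-to-submit delta against a plausible band. The 15–90 second range comes from the paragraph above; the labels and the upper cutoff in this sketch are illustrative assumptions, not a documented platform rule.

```python
def session_time_flag(page_load_ts: float, submit_ts: float) -> str:
    """Classify dwell time against a human-plausible band.
    15 s lower bound comes from the article; the 600 s upper
    cutoff and label names are illustrative."""
    dwell = submit_ts - page_load_ts
    if dwell < 15:
        return "too_fast"       # near-zero dwell: likely direct endpoint hit
    if dwell > 600:
        return "stale_session"  # parked tab or scripted delay; hold for review
    return "plausible"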
Navigation path is more sophisticated. Did the session arrive via a link from social media, or did it arrive directly at the voting URL with no referrer? Did the session navigate to the entrant’s profile or content before voting, or did it go directly to the voting form? Real human voters often browse the contest entry before committing a vote. Bot-like sessions skip directly to the submission endpoint.
Mouse movement and scroll entropy require client-side JavaScript instrumentation, which not all contest platforms implement. But those that do — particularly high-stakes platforms with significant prize values — collect mouse trajectory data and scroll event sequences that can distinguish organic interaction from automated filling.
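One simple way such instrumentation could score a trajectory is Shannon entropy over quantized movement directions: a scripted straight-line path yields near-zero entropy, while organic movement varies direction and scores higher. This is a sketch of the general idea only, not any specific vendor's method.

```python
import math
from collections import Counter

def movement_entropy(points):
    """Shannon entropy (bits) of quantized movement directions for a
    mouse path given as (x, y) samples. The 8-direction quantization
    is an illustrative choice for this sketch."""
    directions = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)
        directions.append(round(4 * angle / math.pi) % 8)  # 8 bins
    if not directions:
        return 0.0
    counts = Counter(directions)
    total = len(directions)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A perfectly linear path produces a single direction bin and zero entropy; a path that turns corners produces several bins and positive entropy, which is the separation a behavioral scorer would exploit.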
For buyers of IP-diverse votes, the implication is that the delivery mechanism must operate at the full session layer, not just the IP layer. A vote delivered through a session that looks human — arriving on the page, spending realistic time, navigating with human-like behavior — passes behavioral scoring regardless of the IP reputation tier. This is what separates session-context delivery from simple proxy rotation.
See our residential vs datacenter proxy comparison for a detailed breakdown of how different IP types perform across both IP reputation and behavioral detection layers.
What Are the Practical Implications for Anyone Buying Unique IP Votes?
For vote buyers, "unique IP" is a necessary condition but nowhere near sufficient for vote quality. The practical standard in 2026 is unique residential-or-mobile IPs, drawn from diverse ISP ASNs, with no subnet clustering, geographically coherent with the contest's audience, paired with human-like session behavior. Any provider advertising "unique IPs" without specifying residential ASN composition, subnet diversity, and behavioral session delivery is describing the 2019 standard, not the 2026 one.
The questions to ask any provider:
- What percentage of your IPs come from residential ISP ASNs vs datacenter or proxy ASNs?
- How many distinct /24 subnets does your pool draw from?
- What is your average geographic coverage for a US-targeted campaign?
- Do your sessions include time-on-page and navigation behavior, or do you submit directly to the voting endpoint?
- What is your policy on vote replacement for platform audits?
Providers who have real answers to all five are operating at the current technical standard. Providers who answer question 1 with “residential IPs” but can’t answer questions 2 or 4 are likely using residential proxy services without behavioral session layering — a hybrid that worked in 2022 but underperforms on platforms that have adopted behavioral scoring.
Our IP vote service documentation covers our current delivery methodology for clients who want technical specifics before ordering. For the full detection landscape including email and CAPTCHA layers, see our email verification explainer and the captcha detection mechanics article.
The Cloudflare overview of bot detection provides a useful external reference for understanding how professional fraud-detection infrastructure is built — the same principles that contest platforms apply through their vendor integrations.
Frequently Asked Questions
What does unique IP mean in the context of contest voting?
In contest voting, a unique IP means the ballot system records only one vote per distinct IP address. Deduplication occurs at the IP layer so that two votes from the same address count as one. Most platforms implement this as the baseline anti-fraud measure. However, uniqueness is the minimum requirement, not the quality ceiling — platforms apply multiple additional checks beyond simple deduplication.
What is an ASN and why does it matter for contest votes?
An Autonomous System Number (ASN) is an identifier assigned to a network operated by a single entity — an internet service provider, a datacenter operator, a cloud provider. ASNs group related IP ranges under a common organizational identity. Contest platforms and their fraud-detection tools maintain ASN reputation scores based on historical abuse signals. An IP from a high-reputation ISP ASN (Comcast, AT&T, BT) is inherently more trusted than one from a known hosting or proxy ASN.
How do platforms tell if an IP is from a datacenter?
Platforms use commercial IP intelligence databases (MaxMind, IPinfo, Spur.us) that classify IP ranges by type: residential, mobile, datacenter, CDN, VPN, proxy, or Tor exit. Datacenter IP ranges are maintained by cloud providers (AWS, Google Cloud, Azure, Hetzner, DigitalOcean) and are publicly documented in their IP allocation lists. A vote from an AWS us-east-1 IP range is trivially identifiable as non-residential.
Are residential proxy IPs trusted by contest platforms?
Residential proxies occupy a gray zone. The underlying IP belongs to a real ISP subscriber's device — often enrolled in a residential proxy network without full awareness. The IP itself has good ASN reputation. However, advanced IP intelligence tools like Spur.us specifically identify residential proxy pool IP ranges and flag them as 'residential proxy' rather than 'clean residential.' Platforms with Spur.us or similar integration will see these flags.
What geographic distribution is most effective for IP votes?
Geographic distribution should mirror the organic demographic profile of the contest. A US-based radio station listener poll should have IPs predominantly from US residential ISPs, distributed across multiple states proportional to the station's listener base. Random global scatter — votes from 40 different countries with no connection to the contest's audience — is itself a fraud signal. Coherence with the expected audience is more valuable than pure geographic diversity.
Does IPv6 behave differently from IPv4 for contest detection?
IPv6 from consumer ISPs is generally trusted similarly to IPv4 residential addresses — the ISP assignment is visible in the ASN, and home broadband connections using IPv6 look like legitimate consumer traffic. IPv6 from datacenters is classified by the same IP intelligence tools that flag datacenter IPv4. The key distinction is the ASN, not the IP version. A Comcast IPv6 address and a Comcast IPv4 address carry equivalent trust.
What is IP clustering and how does it trigger detection?
IP clustering occurs when multiple votes come from the same /24 or /16 subnet within a short window — suggesting they share an exit node, proxy pool, or network origin. For example, 20 votes arriving from IPs in a single public /24 such as 203.0.113.x within 10 minutes creates an obvious cluster signature. Platforms use subnet analysis to identify these patterns even when individual IPs are technically unique. True IP diversity requires subnet-level spread, not just individual IP uniqueness.
How many unique IPs are realistic for a legitimate campaign?
For a genuinely organic voter pool, the IP diversity tracks roughly 1:1 with unique voters. Most home broadband subscribers have one IPv4 address (sometimes dynamic) — so 500 real voters typically produce 480–500 unique IPs after accounting for shared household devices and NAT. A 500-vote order from a quality provider should similarly show near-1:1 unique IP to vote ratio, drawn from diverse residential and mobile ASNs.
Can platforms detect VPN or proxy use at the vote level?
Yes, with high accuracy. Commercial VPN services maintain relatively small exit IP pools, and IP intelligence databases catalog known VPN exit nodes from providers like NordVPN, ExpressVPN, and Mullvad with high completeness. Consumer VPN exit IPs are flagged as 'VPN' in MaxMind and IPinfo classifications. Datacenter-routed VPNs are doubly flagged: once for the VPN type and once for the hosting ASN.
What is the difference between IP deduplication and behavioral detection?
IP deduplication is a rule-based check: one vote per IP. Behavioral detection is a scoring system that evaluates session signals — time on page, navigation patterns, mouse movement entropy, click velocity — alongside IP reputation. Modern platforms layer both: deduplication filters duplicate IPs, and behavioral scoring evaluates whether each unique IP's session looks human. You can pass deduplication on a datacenter IP and still fail behavioral scoring.
How does mobile IP affect contest vote detection?
Mobile IPs — from carriers like T-Mobile, Verizon, AT&T, or international equivalents — are among the highest-trust IP types for contest platforms. Mobile carrier NAT means multiple users can share a single IP, so platforms that strictly enforce one-vote-per-IP may exclude legitimate mobile voters. Platforms aware of this typically apply looser deduplication rules for known mobile carrier IP ranges, which also means quality mobile IPs are effective delivery vectors.
What happens to votes from flagged IP ranges mid-campaign?
Outcomes vary by platform. Some discard votes from flagged IPs silently — the voter sees a success message but the vote is not counted. Others display an error and reject the submission. Some hold flagged votes in a pending state for manual review. The worst-case scenario is retroactive invalidation: votes are counted initially but removed during a post-campaign audit, which can happen in competitive brackets with disputed results.