
How to Use Proxies with SeleniumBase: A Complete Guide

Michael Lee

Expert Network Defense Engineer

21-Nov-2025
Take a Quick Look

Enhance your SeleniumBase tests and web scraping with high-quality proxies for geo-targeting, anonymity, and bypassing anti-bot systems.

SeleniumBase is a powerful Python framework that wraps Selenium WebDriver, providing simplified methods for automated testing and web scraping. While Selenium itself has historically struggled with native proxy support, especially for authenticated proxies, SeleniumBase offers a clean, command-line solution to integrate proxies seamlessly.

Using proxies with SeleniumBase is essential for:

  • Geo-Targeting: Testing or scraping content that is only available in specific geographical locations.
  • Anonymity: Masking the origin of your automated traffic to prevent IP bans.
  • Load Distribution: Spreading high-volume traffic across multiple IP addresses.

This guide will show you how to configure both unauthenticated and authenticated proxies in SeleniumBase and recommend a high-quality proxy provider for your automation needs.

Configuring Proxies in SeleniumBase

SeleniumBase simplifies proxy configuration by allowing you to pass the proxy details directly via a command-line flag when running your tests or scripts.

1. Unauthenticated Proxy

For a simple proxy that does not require a username or password, use the --proxy flag followed by the proxy's URL and port.

Syntax:

```bash
--proxy=your_proxy_url:your_proxy_port
```

Example:

```bash
seleniumbase run my_test.py --proxy=192.168.1.10:8080
```

2. Authenticated Proxy

High-quality residential and ISP proxies almost always require authentication. SeleniumBase handles this by allowing you to embed the username and password directly into the proxy URL, a common convention for proxy configuration.

Syntax:

```bash
--proxy=username:password@proxy_url:proxy_port
```

Example:

```bash
seleniumbase run my_test.py --proxy=user123:pass456@proxy.scrapeless.com:8000
```

When SeleniumBase launches the browser (e.g., Chrome or Firefox), it automatically configures the browser to route all traffic through the specified proxy and handles proxy authentication for you, with no manual browser configuration required.
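Besides the command-line flag, the same proxy string can be passed programmatically to SeleniumBase's `SB()` context manager via its `proxy` argument. The sketch below assumes placeholder credentials and a hypothetical proxy host; the `build_proxy_string` helper is our own convenience function, not part of SeleniumBase:

```python
def build_proxy_string(host, port, username=None, password=None):
    """Assemble a proxy string in the host:port or user:pass@host:port form."""
    if username and password:
        return f"{username}:{password}@{host}:{port}"
    return f"{host}:{port}"

def check_proxy_ip(proxy_string):
    """Open httpbin.org/ip through the proxy and return the page text.

    Needs seleniumbase installed plus a working proxy, so the import is
    kept inside the function.
    """
    from seleniumbase import SB
    with SB(proxy=proxy_string) as sb:
        sb.open("https://httpbin.org/ip")
        return sb.get_text("body")

# Placeholder credentials and host -- substitute your own.
print(build_proxy_string("proxy.example.com", 8000, "user123", "pass456"))
# user123:pass456@proxy.example.com:8000
```

This keeps proxy handling in code, which is convenient when the proxy is chosen dynamically per run rather than fixed on the command line.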

Example: Verifying Proxy Connection

To verify that your proxy is working correctly, you can run a simple SeleniumBase script that navigates to an IP check website.

proxy_test.py:

```python
from seleniumbase import BaseCase

class ProxyTest(BaseCase):
    def test_proxy_ip(self):
        # Navigate to a site that displays the public IP address
        self.open("https://httpbin.org/ip")

        # The page content will show the IP address of the proxy
        ip_info = self.get_text("body")
        print(f"IP Information: {ip_info}")

        # Assert that the "origin" field is present; you can also add
        # assertions that the IP matches the expected geo-location
        self.assert_text("origin", "body")
```

Running the Test with an Authenticated Proxy:

```bash
seleniumbase run proxy_test.py --proxy=user123:pass456@proxy.scrapeless.com:8000 -s
```

The output will confirm that the IP address seen by the target website is the proxy's IP, not your local machine's IP.
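The `https://httpbin.org/ip` endpoint returns a small JSON body such as `{"origin": "203.0.113.5"}`. If you want to assert on the exit IP itself rather than just the presence of the `origin` field, a small parsing helper makes that easy. This is our own illustrative helper, not part of SeleniumBase; `203.0.113.5` is a documentation-reserved example IP:

```python
import json

def extract_origin_ip(body_text):
    """Parse the JSON body returned by https://httpbin.org/ip and
    return the reported origin IP (i.e., the proxy's exit IP)."""
    data = json.loads(body_text)
    # "origin" can contain a comma-separated list when intermediate
    # hops add X-Forwarded-For entries; keep only the first address.
    return data["origin"].split(",")[0].strip()

sample = '{"origin": "203.0.113.5"}'
print(extract_origin_ip(sample))  # 203.0.113.5
```

Inside the test above, you could pass `ip_info` to this helper and assert that the result differs from your machine's real IP.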

For robust, large-scale automation with SeleniumBase, the quality of your proxy network is the single most important factor. Low-quality proxies will be quickly detected and blocked, rendering your automation useless.

Scrapeless Proxies offers a superior, high-performance network that is perfectly suited for browser automation tools like SeleniumBase, ensuring your scripts run reliably and without interruption.

Scrapeless offers a worldwide proxy network that includes Residential, Static ISP, Datacenter, and IPv6 proxies, with access to over 90 million IPs and success rates of up to 99.98%. It supports a wide range of use cases — from web scraping and market research [1] to price monitoring, SEO tracking, ad verification, and brand protection — making it ideal for both business and professional data workflows.

Residential Proxies: Best for SeleniumBase

Residential Proxies are the gold standard for browser automation, as they originate from real user devices and are highly trusted by target websites.

Key Features:

  • Automatic proxy rotation
  • 99.98% average success rate
  • Precise geo-targeting (country/city)
  • HTTP/HTTPS/SOCKS5 protocols
  • <0.5s response time
  • Excellent speed and stability
  • Only $1.80/GB

Static ISP Proxies for Account Management

For tasks like account creation or long-term session management where the IP needs to remain consistent, Scrapeless Static ISP Proxies are the perfect choice. They offer the trust of a residential IP with the speed and stability of a Datacenter IP.

Features:

  • Real residential IPs
  • 99.99% uptime
  • High acceptance rates & low ban risk
  • Geo-location targeting
  • HTTP/HTTPS/SOCKS5 protocols
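For account management, the usual pattern is to pin exactly one static proxy per account so that every session for that account originates from the same IP. A minimal sketch of that mapping, using hypothetical hostnames and credentials:

```python
# Sketch: one fixed static ISP proxy per account, so each account
# always connects from the same IP. Values below are placeholders.
ACCOUNT_PROXIES = {
    "account_a": "user123:pass456@isp1.example.com:8000",
    "account_b": "user123:pass456@isp2.example.com:8000",
}

def proxy_for(account):
    """Look up the static proxy assigned to an account."""
    try:
        return ACCOUNT_PROXIES[account]
    except KeyError:
        raise ValueError(f"No static proxy assigned to {account!r}")

# The returned string plugs straight into --proxy or SB(proxy=...):
print(proxy_for("account_a"))  # user123:pass456@isp1.example.com:8000
```

Never rotate an account's proxy mid-session: a changing IP on a logged-in session is itself a strong anti-bot signal.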

Scrapeless Proxies provides global coverage, transparent pricing, and highly stable performance, making it a strong and trustworthy choice for business-critical and professional data applications that rely on stable universal scraping [2] and product solutions [3] via browser automation.

Conclusion

Integrating proxies into your SeleniumBase workflow is a simple yet critical step for any serious web automation project. By utilizing the --proxy command-line flag and pairing it with a high-quality, reliable provider like Scrapeless Proxies, you can ensure your scripts are anonymous, geo-flexible, and successful in navigating the complex landscape of modern anti-bot systems.


References

[1] SeleniumBase Documentation: Proxy Support
[2] Selenium WebDriver Documentation
[3] W3C: HTTP/1.1 Method Definitions (GET)
[4] IETF: Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing
[5] W3C WebDriver Specification

At Scrapeless, we only access publicly available data while strictly complying with applicable laws, regulations, and website privacy policies. The content in this blog is for demonstration purposes only and does not involve any illegal or infringing activities. We make no guarantees and disclaim all liability for the use of information from this blog or third-party links. Before engaging in any scraping activities, consult your legal advisor and review the target website's terms of service or obtain the necessary permissions.
