
How to Use a Proxy with HTTPX in Python for Anonymous Requests

Michael Lee

Expert Network Defense Engineer

18-Dec-2025
Take a Quick Look

Boost your automation and scraping with Scrapeless Proxies — fast, reliable, and affordable.

HTTPX is a modern, fully featured HTTP client for Python that supports both synchronous and asynchronous requests. When performing web scraping or making numerous API calls, integrating a proxy is a critical step to maintain anonymity and manage request volume. HTTPX makes proxy configuration straightforward, supporting both basic and authenticated setups.

Basic Proxy Configuration in HTTPX

HTTPX allows you to route requests through a proxy by passing the proxy URL to the proxy argument; if you need a different proxy per scheme (http:// or https://), current versions use the mounts argument, shown later in this section. The proxy URL follows the standard format: <PROTOCOL>://<IP_ADDRESS>:<PORT>. Note that the older proxies argument, a dictionary mapping each scheme to a proxy URL, was deprecated in HTTPX 0.26 and removed in 0.28.

python
import httpx

# Define your proxy settings
# (HTTPX 0.26+ takes a single proxy URL via the proxy argument;
#  the older proxies dictionary was removed in 0.28)
proxy_url = "http://216.137.184.253:80"

# Make a request with the specified proxy
try:
    r = httpx.get("https://httpbin.io/ip", proxy=proxy_url)
    print(f"Response IP: {r.json().get('origin')}")
except httpx.ProxyError as e:
    print(f"Proxy connection failed: {e}")

Alternatively, you can configure the proxy when initializing an httpx.Client instance, which is the recommended approach for making multiple requests to the same target, as it reuses the connection [4].

python
import httpx

proxy_url = "http://216.137.184.253:80"

with httpx.Client(proxy=proxy_url) as client:
    r = client.get("https://httpbin.io/ip")
    print(f"Response IP: {r.json().get('origin')}")

Handling Proxy Authentication

For proxies that require a username and password, HTTPX supports embedding the credentials directly into the proxy URL. The format is http://<USERNAME>:<PASSWORD>@<IP_ADDRESS>:<PORT>.

python
import httpx

# Proxy URL with embedded credentials
proxy_url = "http://<YOUR_USERNAME>:<YOUR_PASSWORD>@proxy.scrapeless.com:1337"

with httpx.Client(proxy=proxy_url) as client:
    r = client.get("https://httpbin.io/ip")
    print(f"Response IP: {r.json().get('origin')}")

Implementing Proxy Rotation

To avoid detection and maintain high success rates, you should rotate your proxies. This involves maintaining a list of proxy endpoints and randomly selecting one for each request or session. This is particularly effective when combined with a robust scraping library.

python
import httpx
import random

# List of proxy URLs (e.g., from your Scrapeless dashboard)
proxy_urls = [
    "http://user:pass@proxy1.scrapeless.com:10000",
    "http://user:pass@proxy2.scrapeless.com:10001",
    "http://user:pass@proxy3.scrapeless.com:10002",
]

def make_proxied_request(url):
    # Select a random proxy for the request
    random_proxy = random.choice(proxy_urls)
    
    try:
        # Route all traffic for this client through the selected proxy
        with httpx.Client(proxy=random_proxy, timeout=10.0) as client:
            response = client.get(url)
            response.raise_for_status()
            return response
    except httpx.RequestError as e:
        print(f"An error occurred while requesting {url} via proxy {random_proxy}: {e}")
        return None

# Example usage
response = make_proxied_request("https://targetwebsite.com/data")
if response:
    print(f"Successfully scraped data with status code: {response.status_code}")

For high-volume, asynchronous scraping with HTTPX, a reliable proxy infrastructure is paramount. Scrapeless Proxies are engineered for performance and stealth, offering a diverse pool of IPs that minimize the risk of being blocked. Their Residential and Static ISP Proxies are particularly effective for Python-based scraping, providing the high trust level needed to access complex targets.
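
Since HTTPX also supports asynchronous requests, a minimal sketch of proxied async scraping with httpx.AsyncClient might look like the following (the proxy URL and target URLs are placeholders, and the proxy argument assumes HTTPX 0.26 or later):

python
import asyncio
import httpx

# Placeholder proxy and target URLs for illustration only
PROXY_URL = "http://<YOUR_USERNAME>:<YOUR_PASSWORD>@proxy.scrapeless.com:1337"
URLS = ["https://httpbin.io/ip", "https://httpbin.io/headers"]

async def fetch(client, url):
    try:
        response = await client.get(url)
        response.raise_for_status()
        return response.status_code
    except httpx.HTTPError as e:
        print(f"Request to {url} failed: {e}")
        return None

async def main():
    # One AsyncClient shares the proxy and connection pool across all tasks
    async with httpx.AsyncClient(proxy=PROXY_URL, timeout=10.0) as client:
        results = await asyncio.gather(*(fetch(client, url) for url in URLS))
        print(results)

asyncio.run(main())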

Frequently Asked Questions (FAQ)

Q: Does HTTPX support SOCKS proxies?
A: Yes, HTTPX supports SOCKS proxies once the optional SOCKS dependency is installed (pip install "httpx[socks]"). You then specify the SOCKS protocol in the proxy URL, for example: socks5://user:pass@ip:port [5].
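
As a quick illustration (the SOCKS5 endpoint and credentials below are placeholders):

python
import httpx

# Requires the optional dependency: pip install "httpx[socks]"
socks_proxy = "socks5://user:pass@proxy.example.com:1080"

with httpx.Client(proxy=socks_proxy) as client:
    print(client.get("https://httpbin.io/ip").json())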

Q: What is the benefit of using httpx.Client over simple httpx.get()?
A: Using httpx.Client allows for connection pooling and session management, which is more efficient for making multiple requests. It also lets you set defaults, such as a proxy, headers, or timeouts, that apply to every request made within that client session.
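
For example, a single client can carry shared defaults for every request it makes (the proxy and headers below are illustrative placeholders):

python
import httpx

# Illustrative defaults; replace the proxy and headers with your own
with httpx.Client(
    proxy="http://user:pass@proxy1.scrapeless.com:10000",
    headers={"User-Agent": "my-scraper/1.0"},
    timeout=10.0,
) as client:
    for path in ("/ip", "/headers"):
        r = client.get(f"https://httpbin.io{path}")
        print(r.status_code)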

Q: How do I handle proxy errors in HTTPX?
A: HTTPX raises specific exceptions for network issues. You should wrap your requests in a try...except block and catch httpx.ProxyError or the more general httpx.RequestError to implement retry logic or switch to a different proxy.
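
A minimal sketch of that retry pattern, reusing the hypothetical proxy list from the rotation example above, could look like this:

python
import httpx

# Hypothetical proxy pool (same placeholders as the rotation example)
proxy_urls = [
    "http://user:pass@proxy1.scrapeless.com:10000",
    "http://user:pass@proxy2.scrapeless.com:10001",
]

def get_with_retries(url, max_attempts=3):
    for attempt in range(max_attempts):
        proxy = proxy_urls[attempt % len(proxy_urls)]  # switch proxy each attempt
        try:
            with httpx.Client(proxy=proxy, timeout=10.0) as client:
                return client.get(url)
        except httpx.ProxyError as e:
            print(f"Proxy {proxy} failed (attempt {attempt + 1}): {e}")
        except httpx.RequestError as e:
            print(f"Request error via {proxy} (attempt {attempt + 1}): {e}")
    return None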

At Scrapeless, we only access publicly available data while strictly complying with applicable laws, regulations, and website privacy policies. The content in this blog is for demonstration purposes only and does not involve any illegal or infringing activities. We make no guarantees and disclaim all liability for the use of information from this blog or third-party links. Before engaging in any scraping activities, consult your legal advisor and review the target website's terms of service or obtain the necessary permissions.
