
How to Rotate Proxies in Python

Michael Lee

Expert Network Defense Engineer

23-Sep-2025

Introduction
Rotating proxies keeps Python web scraping and automation stable and reliable: traffic from a single IP is quickly rate-limited or blocked. This guide covers 10 practical methods for rotating proxies effectively. For a faster and simpler solution, consider using Scrapeless, which handles proxy rotation automatically.


1. Using a List of Proxies Manually

Conclusion: Manually switching proxies from a list helps avoid detection.

Steps:

  1. Prepare a list of proxies (proxies = ["http://ip:port", ...]).
  2. Use random.choice(proxies) to select a proxy for each request.
  3. Pass the proxy to requests or httpx:
import requests, random

# Pool of proxies to rotate through (replace with working endpoints)
proxies = ["http://111.111.111.111:8080", "http://222.222.222.222:8080"]
proxy = random.choice(proxies)  # pick a different proxy for each request
response = requests.get("https://example.com", proxies={"http": proxy, "https": proxy})
print(response.text)
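
Free proxies fail often, so a manual rotation setup usually needs a retry loop. Below is a minimal sketch (the retries count and timeout are arbitrary choices, not from the original example) that keeps trying different proxies until one responds:

import requests, random

proxies = ["http://111.111.111.111:8080", "http://222.222.222.222:8080"]

def fetch_with_rotation(url, retries=3):
    """Try up to `retries` randomly chosen proxies before giving up."""
    for _ in range(retries):
        proxy = random.choice(proxies)
        try:
            return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
        except requests.RequestException:
            continue  # this proxy failed; try another one
    raise RuntimeError("All proxy attempts failed")

response = fetch_with_rotation("https://example.com")
print(response.status_code)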

Application: Small-scale scraping or testing multiple endpoints without bans.


2. Using Proxy Rotation with Requests Session

Conclusion: A session object allows consistent proxy usage across multiple requests.

Steps:

  1. Create a requests.Session().
  2. Assign a proxy from your list for each session.
import requests, random

proxies = ["http://111.111.111.111:8080", "http://222.222.222.222:8080"]
session = requests.Session()
proxy = random.choice(proxies)  # use one proxy for both schemes so the session keeps a single IP
session.proxies.update({"http": proxy, "https": proxy})
response = session.get("https://example.com")
print(response.status_code)
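
If you want a fresh IP per batch of requests while still reusing connections within each batch, one pattern is a small helper that builds a new session pinned to a randomly chosen proxy. A minimal sketch:

import requests, random

proxies = ["http://111.111.111.111:8080", "http://222.222.222.222:8080"]

def make_session():
    """Return a Session pinned to one randomly chosen proxy."""
    session = requests.Session()
    proxy = random.choice(proxies)
    session.proxies.update({"http": proxy, "https": proxy})
    return session

session = make_session()  # one IP for this batch of requests
print(session.get("https://example.com").status_code)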

Application: When scraping pages that require persistent sessions.


3. Using Third-Party Proxy Libraries

Conclusion: Libraries like requests-ip-rotator simplify proxy rotation.

Steps:

# requests-ip-rotator routes requests through AWS API Gateway so each
# request can exit from a different IP (requires AWS credentials)
import requests
from requests_ip_rotator import ApiGateway

gateway = ApiGateway("https://example.com")
gateway.start()
session = requests.Session()
session.mount("https://example.com", gateway)
print(session.get("https://example.com").text)
gateway.shutdown()  # tear down the gateway endpoints when done

Application: Best for APIs or frequent requests requiring IP rotation.


4. Rotating Proxies Using Tor

Conclusion: Tor provides free proxy rotation via circuits.

Steps:

  1. Install Tor (sudo apt install tor), the stem library (pip install stem), and SOCKS support for requests (pip install requests[socks]).
  2. Route traffic through Tor's SOCKS proxy (127.0.0.1:9050).
  3. Request a new circuit, and therefore a new exit IP, via Tor's control port (see the stem sketch after the snippet below).
import requests

# socks5h:// resolves DNS inside Tor as well, avoiding DNS leaks
proxies = {"http": "socks5h://127.0.0.1:9050", "https": "socks5h://127.0.0.1:9050"}
response = requests.get("https://example.com", proxies=proxies)
print(response.text)
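
To rotate the exit IP, signal Tor to build a new circuit with stem. A minimal sketch, assuming the ControlPort is enabled in your torrc (port 9051 here) and authentication is configured:

from stem import Signal
from stem.control import Controller

# Ask Tor for a new circuit, which usually yields a new exit IP
with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # cookie auth or password, per your torrc
    controller.signal(Signal.NEWNYM)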

Application: Scraping anonymous data without paying for proxies.


5. Using Rotating Proxy Providers

Conclusion: Paid providers ensure high reliability and speed.

Steps:

  1. Subscribe to a service such as Bright Data (formerly Luminati), ScraperAPI, or Scrapeless.
  2. Use their endpoints with authentication.
  3. Automatically rotate proxies for each request.
import requests

# Generic pattern for an HTTP-API-style provider (placeholder URL)
api_key = "YOUR_API_KEY"
url = f"http://proxyprovider.com?api_key={api_key}&url=https://example.com"
response = requests.get(url)
print(response.text)
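
Many providers instead expose a single rotating gateway that you use like an ordinary authenticated proxy; the credentials, hostname, and port below are placeholders, so substitute your provider's values:

import requests

# Rotating gateway: the provider assigns a new exit IP per request or session
proxy = "http://USERNAME:PASSWORD@gateway.provider.example:8000"
response = requests.get("https://example.com", proxies={"http": proxy, "https": proxy})
print(response.status_code)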

Application: Large-scale scraping projects with minimal downtime.


6. Proxy Rotation in Selenium

Conclusion: Web automation requires rotating proxies to prevent IP bans.

Steps:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import random

proxies = ["111.111.111.111:8080", "222.222.222.222:8080"]
chrome_options = Options()
# Note: --proxy-server does not accept username:password credentials
chrome_options.add_argument(f'--proxy-server={random.choice(proxies)}')
driver = webdriver.Chrome(options=chrome_options)
driver.get("https://example.com")
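
Because Chrome ignores credentials in --proxy-server, authenticated proxies need a workaround; one common option is the third-party selenium-wire package. A minimal sketch (the credentials and endpoint are placeholders):

from seleniumwire import webdriver  # pip install selenium-wire

# selenium-wire relays traffic locally, so authenticated upstream proxies work
options = {
    "proxy": {
        "http": "http://USERNAME:PASSWORD@111.111.111.111:8080",
        "https": "http://USERNAME:PASSWORD@111.111.111.111:8080",
    }
}
driver = webdriver.Chrome(seleniumwire_options=options)
driver.get("https://example.com")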

Application: Automated browsing and testing with multiple IPs.


7. Rotating Proxies in Scrapy

Conclusion: Scrapy supports middleware for proxy rotation.

Steps:

  1. Keep the built-in HttpProxyMiddleware enabled; it applies whatever proxy is set in request.meta["proxy"].
  2. Define a proxy list in settings.py.
  3. Add a small custom middleware that picks a proxy per request (see the sketch after the settings below).
# settings.py
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
    'myproject.middlewares.RandomProxyMiddleware': 610,  # custom middleware, sketched below
}
# Custom setting consumed by RandomProxyMiddleware
PROXY_LIST = ['http://111.111.111.111:8080', 'http://222.222.222.222:8080']
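
A minimal rotating middleware might look like the following (the class name and module path are illustrative, not part of Scrapy itself):

# myproject/middlewares.py (hypothetical module path)
import random

class RandomProxyMiddleware:
    """Assign a random proxy from PROXY_LIST to every outgoing request."""

    def __init__(self, proxy_list):
        self.proxy_list = proxy_list

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings.getlist("PROXY_LIST"))

    def process_request(self, request, spider):
        request.meta["proxy"] = random.choice(self.proxy_list)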

Application: Large-scale scraping with Scrapy spiders.


8. Using Async Proxy Rotation with httpx

Conclusion: Async libraries enable concurrent requests with multiple proxies.

import httpx, asyncio, random

proxies = ["http://111.111.111.111:8080", "http://222.222.222.222:8080"]

async def fetch(url):
    proxy = random.choice(proxies)
    # httpx takes a single proxy URL (use proxies= on httpx < 0.26)
    async with httpx.AsyncClient(proxy=proxy) as client:
        response = await client.get(url)
        print(url, response.status_code)

async def main():
    # Fetch several URLs concurrently, each through a randomly chosen proxy
    urls = ["https://example.com", "https://example.org"]
    await asyncio.gather(*(fetch(u) for u in urls))

asyncio.run(main())

Application: High-speed scraping for multiple URLs.


9. Auto Proxy Rotation with Scrapeless

Conclusion: Scrapeless handles proxy rotation automatically, simplifying development.

Benefits:

  • No manual proxy list management
  • Integrated with Python scraping scripts
  • Supports high concurrency and reliability

Application: Ideal for both beginners and professionals seeking efficiency.


10. Rotating Proxies with Environment Variables

Conclusion: Environment variables let you change the proxy configuration without touching code.

Steps:

  1. Store a comma-separated proxy list in a .env file (e.g., PROXIES=http://ip1:port,http://ip2:port).
  2. Load the file and randomly select a proxy in your script.
import os, random
from dotenv import load_dotenv
import requests

load_dotenv()  # reads .env from the working directory
proxies = os.getenv("PROXIES", "").split(",")  # e.g. PROXIES=http://ip1:port,http://ip2:port
proxy = random.choice(proxies)
response = requests.get("https://example.com", proxies={"http": proxy, "https": proxy})
print(response.status_code)

Application: Useful for configurable scraping pipelines.


Comparison Summary

Method           | Ease of Use | Cost      | Scalability | Best Use Case
Manual List      | Easy        | Free      | Low         | Small projects
Requests Session | Easy        | Free      | Medium      | Persistent sessions
Proxy Libraries  | Medium      | Free/Paid | Medium      | API calls
Tor              | Medium      | Free      | Low         | Anonymous scraping
Paid Providers   | Easy        | Paid      | High        | Large-scale scraping
Selenium         | Medium      | Free/Paid | Medium      | Automation tasks
Scrapy           | Medium      | Free      | High        | Large spider projects
Async httpx      | Hard        | Free      | High        | High concurrency
Scrapeless       | Very Easy   | Paid      | High        | All-in-one solution
Env Variables    | Medium      | Free      | Medium      | Configurable pipelines

Key Takeaways

  • Rotating proxies is essential to prevent blocks and maintain scraping stability.
  • Python supports many methods, from manual lists to async and paid solutions.
  • Scrapeless offers the easiest and most reliable rotation method.

FAQ

Q1: Why should I rotate proxies in Python?
To avoid IP bans, rate limits, and ensure reliable scraping.

Q2: Can I use free proxies for rotation?
Yes, but reliability and speed may be low. Paid services are recommended for large projects.

Q3: Does Scrapeless require coding?
Minimal coding is required; it integrates seamlessly with Python scripts.

Q4: Can I rotate proxies in Selenium?
Yes, by configuring the browser options with different proxy servers.

Q5: How often should I rotate proxies?
It depends on your request volume and target website policies.


For a hassle-free experience, try Scrapeless to automatically handle proxy rotation and focus on data collection instead of manual management.


Disclaimer

At Scrapeless, we only access publicly available data while strictly complying with applicable laws, regulations, and website privacy policies. The content in this blog is for demonstration purposes only and does not involve any illegal or infringing activities. We make no guarantees and disclaim all liability for the use of information from this blog or third-party links. Before engaging in any scraping activities, consult your legal advisor and review the target website's terms of service or obtain the necessary permissions.
