
How to Use Proxies with node-fetch: A Complete Guide

Michael Lee

Expert Network Defense Engineer

28-Nov-2025
Take a Quick Look

Master the art of configuring proxies in Node.js using the popular `node-fetch` library for anonymous and efficient web scraping.

The `node-fetch` library is a popular choice for making HTTP requests in Node.js, providing a fetch API similar to the one available in modern web browsers. For tasks like web scraping, geo-targeting, or bypassing rate limits, routing requests through a proxy is essential.

However, unlike some other HTTP clients, `node-fetch` has no built-in proxy support, and it ignores the `HTTP_PROXY`/`HTTPS_PROXY` environment variables. This guide walks you through the tools and steps needed to route `node-fetch` requests through both HTTP/HTTPS and SOCKS proxies.

Prerequisites

Before starting, ensure you have a Node.js environment set up and the following packages installed:

  1. `node-fetch`: The primary HTTP client.
  2. `https-proxy-agent`: Used for connecting to HTTP/HTTPS proxies.
  3. `socks-proxy-agent`: Used for connecting to SOCKS proxies.

You can install them using npm:

```bash
npm install node-fetch https-proxy-agent socks-proxy-agent
```
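
Note that `node-fetch` v3 is an ESM-only package. If your project uses CommonJS, you can either pin `node-fetch@2` (which supports `require`) or load v3 via a dynamic `import()`. A minimal sketch of the latter approach:

```javascript
// CommonJS sketch: node-fetch v3 is ESM-only, so load it with a dynamic import().
// Assumes a Node.js version where dynamic import() works inside CommonJS modules.
const fetch = (...args) =>
  import('node-fetch').then(({ default: fetch }) => fetch(...args));

fetch('https://example.com')
  .then((res) => console.log(res.status))
  .catch((err) => console.error(err));
```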

1. Using HTTP/HTTPS Proxies with node-fetch

To use an HTTP or HTTPS proxy, you need the `https-proxy-agent` package. It creates an `Agent` object that `node-fetch` uses to route its requests through the specified proxy.

Step 1: Import Necessary Modules

```javascript
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';
```

Step 2: Define the Proxy URL

The proxy URL should be in the format `http://[username:password@]host:port`.

```javascript
// Replace with your actual proxy details
const proxyUrl = 'http://username:password@proxy.scrapeless.com:8000';
```
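
If your proxy username or password contains special characters (such as `@` or `:`), they must be percent-encoded before being placed in the URL. A small sketch, using hypothetical credentials:

```javascript
// Build the proxy URL safely; encodeURIComponent escapes characters like '@' or ':'.
// Host and port below are placeholders for your own proxy details.
const username = encodeURIComponent('user@example');
const password = encodeURIComponent('p:ss@word');
const encodedProxyUrl = `http://${username}:${password}@proxy.scrapeless.com:8000`;
```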

Step 3: Create the Agent

Instantiate the `HttpsProxyAgent` with your proxy URL.

```javascript
const agent = new HttpsProxyAgent(proxyUrl);
```

Step 4: Make the Request

Pass the created `agent` object in the `fetch` options.

```javascript
const targetUrl = 'https://example.com/data';

fetch(targetUrl, { agent })
  .then(response => {
    console.log(`Status: ${response.status}`);
    return response.text();
  })
  .then(text => console.log(text.substring(0, 200) + '...'))
  .catch(error => console.error('Fetch error:', error));
```

This method ensures that all traffic for this specific fetch call is routed through your proxy, providing the necessary anonymity and geo-targeting capabilities.
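
To confirm the proxy is actually in use, a common check is to request an IP-echo service and compare the reported address with your own. A minimal async/await sketch, assuming the same imports and `agent` as above and using httpbin.org/ip as the echo endpoint:

```javascript
// Verify the exit IP by asking an echo service which address it sees.
async function checkExitIp(agent) {
  const response = await fetch('https://httpbin.org/ip', { agent });
  if (!response.ok) {
    throw new Error(`Unexpected status: ${response.status}`);
  }
  const body = await response.json();
  console.log('Exit IP:', body.origin); // Should be the proxy's IP, not yours
}

checkExitIp(agent).catch((err) => console.error('Proxy check failed:', err));
```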

2. Using SOCKS Proxies with node-fetch

SOCKS proxies (SOCKS4 and SOCKS5) are often preferred because they can carry arbitrary TCP traffic (not just HTTP) and offer a higher level of anonymity. To use them with `node-fetch`, you need the `socks-proxy-agent` package.

Step 1: Import Necessary Modules

```javascript
import fetch from 'node-fetch';
import { SocksProxyAgent } from 'socks-proxy-agent';
```

Step 2: Define the SOCKS Proxy URL

The SOCKS proxy URL should start with `socks://` or `socks5://`.

```javascript
// Replace with your actual SOCKS5 proxy details
const socksProxyUrl = 'socks5://username:password@proxy.scrapeless.com:1080';
```

Step 3: Create the Agent

Instantiate the `SocksProxyAgent` with your SOCKS proxy URL.

```javascript
const socksAgent = new SocksProxyAgent(socksProxyUrl);
```

Step 4: Make the Request

Pass the `socksAgent` in the `fetch` options.

```javascript
const targetUrl = 'https://example.com/data';

fetch(targetUrl, { agent: socksAgent })
  .then(response => {
    console.log(`Status: ${response.status}`);
    return response.text();
  })
  .then(text => console.log(text.substring(0, 200) + '...'))
  .catch(error => console.error('Fetch error:', error));
```
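
If your application may receive either HTTP(S) or SOCKS proxy URLs at runtime, you can pick the right agent from the URL scheme. A small helper sketch (the `makeProxyAgent` function and the `PROXY_URL` variable are illustrative, not part of either library):

```javascript
import { HttpsProxyAgent } from 'https-proxy-agent';
import { SocksProxyAgent } from 'socks-proxy-agent';

// Choose the appropriate agent based on the proxy URL's protocol.
function makeProxyAgent(proxyUrl) {
  const { protocol } = new URL(proxyUrl);
  if (protocol === 'http:' || protocol === 'https:') {
    return new HttpsProxyAgent(proxyUrl);
  }
  if (protocol.startsWith('socks')) {
    return new SocksProxyAgent(proxyUrl);
  }
  throw new Error(`Unsupported proxy protocol: ${protocol}`);
}

// Usage: the same fetch call works with either proxy type.
const agent = makeProxyAgent(process.env.PROXY_URL ?? 'socks5://user:pass@proxy.example.com:1080');
```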

For developers and enterprises relying on Node.js for web scraping and data collection, the quality of the proxy network is paramount. Scrapeless Proxies offers a robust, high-performance network that is well suited to integration with `node-fetch` and its proxy agents.

Scrapeless offers a worldwide proxy network that includes Residential, Static ISP, Datacenter, and IPv6 proxies, with access to over 90 million IPs and success rates of up to 99.98%. It supports a wide range of use cases, from web scraping and market research [1] to price monitoring, SEO tracking, ad verification, and brand protection, making it suitable for both business and professional data workflows.

Scrapeless Proxies: Key Features for Node.js Developers

Scrapeless's network is optimized for the high concurrency and reliability required by Node.js applications:

  • Residential Proxies: Over 90 million real residential IPs, perfect for high-anonymity scraping.
  • Datacenter Proxies: High-performance IPs optimized for large-scale automation and massive concurrency.
  • Protocol Support: Full support for HTTP, HTTPS, and SOCKS5, ensuring seamless integration with both `https-proxy-agent` and `socks-proxy-agent`.
  • High Success Rate: A 99.98% average success rate minimizes the need for complex error handling and retries in your Node.js code (a simple retry sketch follows this list).
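
Even with a reliable network, transient failures can occur, so a light retry wrapper keeps scraping code robust. A minimal sketch (the `fetchWithRetry` helper and its parameters are illustrative, not part of any library):

```javascript
// Retry a proxied fetch a few times with a simple linear backoff.
async function fetchWithRetry(url, options, retries = 3, delayMs = 1000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const response = await fetch(url, options);
      if (response.ok) return response;
      console.warn(`Attempt ${attempt}: status ${response.status}`);
    } catch (error) {
      console.warn(`Attempt ${attempt} failed:`, error.message);
    }
    if (attempt < retries) {
      await new Promise((resolve) => setTimeout(resolve, delayMs * attempt));
    }
  }
  throw new Error(`All ${retries} attempts failed for ${url}`);
}

// Usage with the HTTPS proxy agent from earlier:
// const response = await fetchWithRetry(targetUrl, { agent }, 3);
```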

Scrapeless Proxies provides global coverage, transparency, and highly stable performance, making it a strong and trustworthy choice for business-critical and professional data applications that require reliable universal scraping [2] and product solutions [3] with Node.js.

Conclusion

Integrating proxies with `node-fetch` requires specialized agent libraries like `https-proxy-agent` and `socks-proxy-agent`. By correctly configuring these agents with a high-quality proxy provider such as Scrapeless Proxies, you can give your Node.js applications the anonymity, speed, and reliability needed for successful data acquisition.


References

[1] Node.js Documentation: Class: http.Agent
[2] npm: node-fetch
[3] npm: https-proxy-agent
[4] npm: socks-proxy-agent
[5] IETF RFC 1928: SOCKS Protocol Version 5

At Scrapeless, we only access publicly available data while strictly complying with applicable laws, regulations, and website privacy policies. The content in this blog is for demonstration purposes only and does not involve any illegal or infringing activities. We make no guarantees and disclaim all liability for the use of information from this blog or third-party links. Before engaging in any scraping activities, consult your legal advisor and review the target website's terms of service or obtain the necessary permissions.
