
Product Research with Google Search API: A Deep Dive into Labubu Using Scrapeless

Emily Chen

Advanced Data Extraction Specialist

19-Jun-2025

Introduction

Evaluating a new product like Labubu requires more than just browsing a few websites. To conduct meaningful research, especially when making data-driven decisions, you need depth, scale, and structure. That's where Scrapeless's advanced Google Search API comes in.

In this guide, we’ll walk you through how to harness the power of Scrapeless for comprehensive product research—gathering insights on user sentiment, technical specs, pricing, and more.

Why Use Scrapeless for Product Research?

The Scrapeless Google Search API allows you to query Google Search programmatically and return structured results you can act on. Instead of manually opening dozens of tabs, Scrapeless brings everything into a clean JSON format—tailored to your queries.

This is incredibly useful for product research, where you often need to:

  • Compare features across competitors
  • Understand real user experiences
  • Track sentiment across reviews and forums
  • Analyze how a product is positioned in the market

Getting Started

To begin, make sure you have:

  • A valid Scrapeless API Token (you can request one from your dashboard)
  • Python 3.x installed (or another scripting environment)
  • A clear list of search questions related to Labubu

Some examples of product-focused queries include:

  • "Labubu pricing model"
  • "Labubu feature list"
  • "Labubu performance benchmarks"
  • "Labubu API reliability"
  • "Labubu integration experiences"
  • "Labubu alternatives comparison"

Each of these targets a specific facet of product research—whether it’s pricing, tech capabilities, or competitive landscape.

Once you receive search results from Scrapeless, you can:

  • Parse content to extract product details
  • Aggregate patterns across sources
  • Score sentiment from user feedback
  • Track changes over time as new reviews appear

Scrapeless gives you an automated, repeatable process that reduces the manual burden of research while increasing the depth and precision of your insights.
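As a sketch of the parsing step, here is how you might pull titles and snippets out of a response. The `search_information` / `total_results` fields match what the script later in this post uses; the `organic_results` field name and its keys are assumptions, so verify them against your actual Scrapeless payload.

```python
import json

# Illustrative response shaped like a search API payload; "organic_results"
# and its keys are assumptions -- check your real Scrapeless output.
raw = """
{
  "search_information": {"total_results": 1200},
  "organic_results": [
    {"title": "Labubu review roundup", "snippet": "Collectors call the figure adorable and well-made."},
    {"title": "Labubu pricing guide", "snippet": "Prices vary by series and retailer."}
  ]
}
"""

def extract_results(response_text):
    """Parse a search response and return (title, snippet) pairs."""
    data = json.loads(response_text)
    return [(r["title"], r["snippet"]) for r in data.get("organic_results", [])]

pairs = extract_results(raw)
print(len(pairs))
```

From pairs like these you can aggregate patterns, feed snippets into sentiment scoring, or diff results over time.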

Understanding Product Features and Specifications

Using the Scrapeless Google Search API, run the product-focused queries listed above, for example "Labubu feature list", "Labubu performance benchmarks", and "Labubu alternatives comparison".

These help you evaluate:

  • Product capabilities
  • Technical stability
  • Ease of integration for dev teams

You can even automate comparisons by also querying terms like:

  • Labubu vs [competitor name]

This lets you see how Labubu is positioned in the ecosystem.
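A small helper can generate those comparison queries automatically. The competitor names below are purely hypothetical placeholders; substitute the real names your research surfaces.

```python
def comparison_queries(product, competitors):
    """Build quoted "X vs Y" search queries, one per competitor."""
    return [f'"{product} vs {name}"' for name in competitors]

# Hypothetical competitor names -- replace with real ones.
queries = comparison_queries("Labubu", ["CompetitorA", "CompetitorB"])
print(queries)
```

Each generated query can then be passed straight into the same search function used for the other research queries.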

Gauging User Satisfaction and Sentiment

Specs are important — but so is user feedback.

Scrapeless makes it easy to automate sentiment searches:

  • "Labubu great choice"
  • "Labubu highly recommended"
  • "Labubu user experience"

These queries pull data from:

  • Blogs
  • Forums
  • Review sites

You can build a basic sentiment score by tallying up mentions of positive vs negative sentiment terms.
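A minimal sketch of that tally, assuming you have already collected result snippets as plain strings (the term lists here are illustrative, not exhaustive):

```python
# Illustrative sentiment term lists -- extend for real use.
POSITIVE_TERMS = ["great", "recommended", "adorable", "love"]
NEGATIVE_TERMS = ["broken", "defective", "disappointed"]

def naive_sentiment_score(snippets):
    """+1 for each positive term hit, -1 for each negative term hit."""
    score = 0
    for snippet in snippets:
        text = snippet.lower()
        score += sum(term in text for term in POSITIVE_TERMS)
        score -= sum(term in text for term in NEGATIVE_TERMS)
    return score

result = naive_sentiment_score([
    "Labubu is adorable and highly recommended",
    "Mine arrived broken",
])
print(result)
```

Substring matching is crude (it will match "recommended" inside "not recommended"), so treat this only as a first pass before proper sentiment analysis.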

Discovering Expert Opinions and In-Depth Analyses

For more professional insights, try structured searches like:

  • "Labubu expert reviews"
  • "Labubu case study"

These will help uncover:

  • Analyst feedback
  • Business implementations
  • Long-term evaluation of product quality

This adds another layer of strategic value beyond customer reviews.

Bonus: Python Script for Scoring Labubu

To make your workflow even more powerful, we’ve included a basic Python script that:

  • Sends queries through the Scrapeless API
  • Counts how many results match positive vs negative sentiment terms
  • Outputs a simple score to gauge public perception

You’ll need:

  • Python 3.x installed – download from python.org
  • A valid Scrapeless API Token – replace SCRAPELESS_API_TOKEN in the code
  • The requests library – install via terminal: pip install requests

Once you run the script, it will generate both console output and a text file with your results.

Python Script (labubu_research_script.py)

import json
import requests

SCRAPELESS_API_TOKEN = "SCRAPELESS_API_TOKEN"

def search_google_with_scrapeless(query, gl="us", hl="en", google_domain="google.com", num="10"):
    """Performs a Google search using the Scrapeless API and returns the JSON response."""
    host = "api.scrapeless.com"
    url = f"https://{host}/api/v1/scraper/request"

    headers = {
        "x-api-token": SCRAPELESS_API_TOKEN,
        "Content-Type": "application/json"  # set explicitly since the body is sent via data=
    }

    json_payload = json.dumps({
        "actor": "scraper.google.search",
        "input": {
            "q": query,
            "gl": gl,
            "hl": hl,
            "google_domain": google_domain,
            "start": "0",
            "num": num
        }
    })

    try:
        response = requests.post(url, headers=headers, data=json_payload)
        response.raise_for_status()  # Raise an exception for HTTP errors
        return response.json()
    except requests.exceptions.RequestException as e:
        print(f"Error during API request: {e}")
        return None

def get_result_count(query):
    search_results = search_google_with_scrapeless(query, num="1") # Only need 1 result to get total_results
    if search_results and "search_information" in search_results:
        # Scrapeless API might not have spelling_fix, so we rely on total_results
        return int(search_results["search_information"].get("total_results", 0))
    return 0

def negative_queries(product_name):
    return [
        f'"{product_name} broken"',
        f'"{product_name} defective"',
        f'"{product_name} quality issues"',
        f'"{product_name} paint chipping"',
        f'"{product_name} fragile"',
        f'"{product_name} easily damaged"',
        f'"{product_name} not as pictured"',
        f'"{product_name} disappointed with"',
        f'"{product_name} regret buying"',
        f'"{product_name} waste of money"',
    ]

def positive_queries(product_name):
    return [
        f'"{product_name} cute"',
        f'"{product_name} adorable"',
        f'"{product_name} well-made"',
        f'"{product_name} high quality"',
        f'"{product_name} great design"',
        f'"{product_name} perfect gift"',
        f'"{product_name} highly collectible"',
        f'"{product_name} worth the price"',
        f'"{product_name} love my"',
        f'"{product_name} recommended"',
    ]

def conduct_labubu_product_research_with_scoring():
    product_name = "Labubu"

    negative_markers = 0
    positive_markers = 0

    print(f"Searching for product: {product_name}\n")

    print("Negative results found:")
    negative_results_output = []
    for query in negative_queries(product_name):
        count = get_result_count(query)
        if count > 0:
            negative_markers += 1
            negative_results_output.append(f'\"{query}\": {count}')
            print(f'\"{query}\": {count}')
    if not negative_results_output:
        print("none")

    print("\nPositive results found:")
    positive_results_output = []
    for query in positive_queries(product_name):
        count = get_result_count(query)
        if count > 0:
            positive_markers += 1
            positive_results_output.append(f'\"{query}\": {count}')
            print(f'\"{query}\": {count}')
    if not positive_results_output:
        print("none")

    score = positive_markers - negative_markers

    print(f"\nNegative markers: {negative_markers}")
    print(f"Positive markers: {positive_markers}")
    print(f"Score: {score}")

    # Save the output to a file for later inclusion in the article
    with open("labubu_scoring_output.txt", "w", encoding="utf-8") as f:
        f.write(f"Searching for product: {product_name}\n\n")
        f.write("Negative results found:\n")
        if negative_results_output:
            f.write("\n".join(negative_results_output) + "\n")
        else:
            f.write("none\n")
        f.write("\nPositive results found:\n")
        if positive_results_output:
            f.write("\n".join(positive_results_output) + "\n")
        else:
            f.write("none\n")
        f.write(f"\nNegative markers: {negative_markers}\n")
        f.write(f"Positive markers: {positive_markers}\n")
        f.write(f"Score: {score}\n")

if __name__ == "__main__":
    conduct_labubu_product_research_with_scoring()

Running the Script

After setting up your environment and adding your API token, run the script via terminal:

python3 labubu_research_script.py

The script will:
  • Loop through all predefined sentiment queries
  • Query the Scrapeless Google Search API
  • Print the result count found for each term
  • Save a summary to labubu_scoring_output.txt

Code Overview

Here’s a quick breakdown of the main components:

  • search_google_with_scrapeless()
    Sends a request to the Scrapeless API with a given query.
  • get_result_count()
    Calls the search function and returns the total number of results.
  • positive_queries() / negative_queries()
    Define search terms for identifying user sentiment about Labubu as a toy.
  • conduct_labubu_product_research_with_scoring()
    Main function: runs all queries, calculates sentiment scores, and saves a summary to labubu_scoring_output.txt.

Here's an example of the output generated by running the modified Python script with toy-specific queries:

Searching for product: Labubu

Negative results found:
""Labubu broken"": 869
""Labubu defective"": 721
""Labubu not as pictured"": 29700000
""Labubu disappointed with"": 2
""Labubu waste of money"": 5

Positive results found:
""Labubu cute"": 473000
""Labubu adorable"": 44900
""Labubu well-made"": 5
""Labubu high quality"": 17700
""Labubu perfect gift"": 3130
""Labubu worth the price"": 2430
""Labubu love my"": 4570
""Labubu recommended"": 375

Negative markers: 5
Positive markers: 8
Score: 3

In this model, a score of `0` means balanced sentiment. A positive score suggests favorable mentions, while a negative score indicates concerns. This basic scoring can be upgraded with:
  • Weighted keyword impact
  • Sentiment analysis of snippets
  • Result categorization
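As one sketch of the first upgrade, weighted keyword impact could look like this. The weights below are arbitrary illustrations, not calibrated values; the counts mirror the script's per-query result totals.

```python
# Arbitrary illustrative weights: stronger phrases move the score more.
WEIGHTS = {
    "waste of money": -3,
    "broken": -2,
    "disappointed with": -1,
    "adorable": 1,
    "high quality": 2,
    "perfect gift": 2,
}

def weighted_markers(counts):
    """counts maps a sentiment phrase to the number of results it returned;
    each phrase with at least one hit contributes its weight once."""
    return sum(w for phrase, w in WEIGHTS.items() if counts.get(phrase, 0) > 0)

score = weighted_markers({"broken": 869, "adorable": 473000, "high quality": 17700})
print(score)  # -2 + 1 + 2 = 1
```

You could go further by scaling each weight by the logarithm of its result count, so terms with millions of hits count more than terms with a handful.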

By automating these checks with Scrapeless, you unlock real-time monitoring of product perception—crucial for product teams and marketers.

Final Thoughts

Product research doesn't have to be manual or messy. With Scrapeless, you can:

  • Automate your data gathering
  • Target exactly what you want to know
  • Stay ahead of market perception in real time

🧪 Start building your own product research pipeline: Scrapeless Google Search API

At Scrapeless, we only access publicly available data while strictly complying with applicable laws, regulations, and website privacy policies. The content in this blog is for demonstration purposes only and does not involve any illegal or infringing activities. We make no guarantees and disclaim all liability for the use of information from this blog or third-party links. Before engaging in any scraping activities, consult your legal advisor and review the target website's terms of service or obtain the necessary permissions.
