DIY PS5 Availability Tracker

Updated on: February 14, 2024

As I mentioned in the How to Get a PS5 at Retail Price article, restock alerts are a useful tool. Twitter has accounts that publicly announce restocks, such as @LordOfRestocks, @MattSwider, and @CameronRitz. However, running a DIY PS5 availability tracker can give you a distinct advantage if it catches a drop before they do.

Randomly refreshing retailers’ websites manually, wistfully hoping a PS5 will suddenly become available, has a depressingly low success rate. But, a bot that constantly checks on your behalf, notifying you when something comes up? Now we’re talking. 

While it is a niche application, such a bot is just a form of web scraper. There are numerous Free Libraries to Build Your Own Web Scraper available; Python alone has several options.

With sufficient programming proficiency, the sky’s the limit as to how sophisticated you can make your bot. 

But, as is the case with most things, taking the first step is the hardest. So, I’ll go over the steps needed for a basic scraper that you can use as a foundation to grow from. After all, it would be rather foolish to try riding a unicycle if you haven’t even mastered walking yet.

What You’ll Need to Make Your DIY PS5 Availability Tracker

We’re going to cover the components of a rudimentary scraper that only checks one direct URL on Amazon. Ideally, the explanations for each step should be enough for you to scale upward on your own. 

Keep in mind that Amazon and other retailers constantly update their page elements. What is written here might need to be adjusted in the future. But, the explanations on how to get the necessary elements will help you make the tweaks yourself.

Basic Understanding of Python

The programming language we’ll be using is Python. It’s one of the easier languages to learn, thanks to the fact that it reads practically like pseudocode. As a popular programming language, there are tons of resources available in the form of both tutorials and modules online.

BeautifulSoup

In keeping with that ease of entry, we'll be using the most beginner-friendly Python library for web scraping: BeautifulSoup.

If you want to start getting fancy down the line, you may want to look into Scrapy. Scrapy is a more elaborate framework with more tools available, but it is also more complex to use.

Be sure to start the code with:

from bs4 import BeautifulSoup
import requests

Or else your bot won’t know what you’re talking about when you start randomly talking about soup. I’d be pretty confused, too, if someone mentioned chicken stock without any context.
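If you don't have these installed yet, both come from pip: pip install beautifulsoup4 requests. The lxml parser used later in the main body is a separate package, installed with pip install lxml.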

Target URL

A quick Amazon search for a PS5 doesn’t immediately give you the correct link, since it’s perpetually out of stock. The top result at the time of writing is a sponsored post for a ‘renewed’ PS5 at a suspicious $999.

A bit more digging will get you the correct link that, you know, isn’t a scalper. Just as page elements may change over time, the link itself might, too. But, for now, the link is: Playstation 5 Console.

Exact HTML Tags

While BeautifulSoup is capable of many things, it still needs precise instructions. And, unfortunately, Amazon may change its page layout over time, meaning you’d need to update the tags you’re searching for. 

Additionally, different sites will most likely use different tags. You’ll need to adjust the appropriate sections of code for whichever site you’re checking at any given time.

With a basic understanding of HTML tags, you can look up the information yourself with a right-click Inspect.

You could also use the Chrome extension SelectorGadget, although I don’t have any personal experience with it.
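For example, at the time of writing, the product name lives in a span with the id productTitle. A quick sanity check that the tag you found via Inspect actually matches might look like this (a sketch that assumes soup has already been built from the product page, as shown in the main body later):

# Check that the tag spotted via Inspect matches something on the page.
tag = soup.find("span", attrs={"id": "productTitle"})
if tag:
    print(tag.get_text(strip=True))
else:
    print("Tag not found; the page layout may have changed.")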

User-Agents

As per the second tip of the Five Tips for Outsmarting Anti-Scraping Techniques, you need to use user agents to mask your bot’s Digital Fingerprint.

When expanding on this basic foundation of a scraper, you’re going to want to cycle through multiple user agents. For now, we’ll just use one in the sample code, though. 

As for what user agents you’ll want to cycle through, there are several resources online with lists of common ones, such as the Common User-Agent List and WhatIsMyBrowser.
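When you do get to that point, a random pick per request is the simplest approach. A minimal sketch, with illustrative user-agent strings of the kind found in the lists linked above:

import random

# Illustrative strings; refresh these from the lists linked above.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.1 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
]

def pick_headers():
    # Vary the fingerprint on every request by picking a random user agent.
    return {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept-Language": "en-US, en;q=0.5",
    }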

Proxies

Only running your DIY PS5 availability tracker a few intermittent times won’t be enough to trigger an IP block. However, once it’s fleshed out and you have it actively running, you’ll need protective measures to avoid getting banned from Amazon.

Of all the different types of proxies available, the ones best suited for web scraping are Rotating Residential Proxies.

TLDR: Rotating means you get a fresh IP address on every request. Residential IPs mean that, as far as Amazon is concerned, it looks like a bunch of random, totally normal people all checking the PS5 page, as opposed to a single bot.
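When the time comes, requests makes this straightforward: you pass a proxies dictionary with each request. A minimal sketch, with a hypothetical gateway address (your provider supplies the real host, port, and credentials), reusing the URL and HEADERS from the main body below:

# Hypothetical rotating-proxy gateway; replace with your provider's details.
PROXIES = {
    "http": "http://user:pass@gateway.example-proxy.com:8000",
    "https": "http://user:pass@gateway.example-proxy.com:8000",
}

# Each request through a rotating gateway exits from a fresh residential IP.
webpage = requests.get(URL, headers=HEADERS, proxies=PROXIES)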

Getting Started on Your DIY PS5 Availability Tracker

Now to start playing with some code segments. I’ll explain each function, rather than just dumping a block of code and calling it a day.

Function: Title

First and foremost: even though this particular scraper only checks for PS5s, it’s good practice to double-check that the bot is actually doing what you’re telling it to.

Have it extract the product information from the site instead of just assuming it’s looking at the right PS5 page. Besides, this sort of function is also useful if you’re checking multiple URLs instead of just one specific one.

BeautifulSoup’s find function will look for the tag with the attributes shown in the HTML tags section. Since the main body later calls get_title(soup), we’ll wrap the lookup in a function:

def get_title(soup):
    # Look for the span element that holds the product name.
    title = soup.find("span", attrs={"id": "productTitle"})

Then, let’s make the result into a string, strip out the excess whitespace, and return it:

    title_value = title.string
    title_string = title_value.strip()
    return title_string

Function: Price

The most rudimentary scraper will only check for availability. However, you ought to make sure it isn’t some third-party seller offering units at a markup. By having the bot scrape the price information, it can compare it to a predetermined acceptable price range.

Bots are only as smart as you program them to be, so you need to account for the error it will hit when an unexpected value is found. Hence the failsafe except clause, so the function always has something to return when called.

def get_price(soup):

    try:
        price = float(soup.find(id='priceblock_ourprice').get_text().replace('$', '').replace(',', '').strip())
    except (AttributeError, ValueError):
        price = ""

    return price

In Python, numeric variables are typically either int or float. Since prices come in dollars and cents, we’re looking at decimals, hence float.

Function: Availability

This availability check will pull the text from the tagged field. This should pretty consistently be “Currently unavailable.” Otherwise, why are you going through the trouble of setting this all up?

def get_availability(soup):
    try:
        available = soup.find("div", attrs={'id':'availability'})
        available = available.find("span").string.strip()
 
    except AttributeError:
        available = ""  
 
    return available   

Another approach would be to return a boolean (Python’s bool type, which holds True or False). That would look something like:

def get_availability(soup):
    try:
        available = soup.find("div", attrs={'id': 'availability'})
        available = available.find("span").string.strip()
        if available == "Currently unavailable.":
            isAvailable = False
        else:
            isAvailable = True

    except AttributeError:
        isAvailable = False

    return isAvailable

Future code samples will operate under the assumption you’re using the first version, though.

Output

We’re already pretty deep into things. Going over making the scraper part of a Discord bot that sends notification pings could fill a whole article on its own.

Similarly, configuring it to send out email alerts with the library smtplib would merit a full tutorial.
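That said, the core of an smtplib alert fits in a few lines. A sketch, assuming an SMTP-over-SSL provider and placeholder addresses and credentials:

import smtplib
from email.message import EmailMessage

def send_alert(subject, body):
    # Placeholders: swap in your own addresses, server, and credentials.
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "tracker@example.com"
    msg["To"] = "you@example.com"
    msg.set_content(body)

    with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
        server.login("tracker@example.com", "app-password")
        server.send_message(msg)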

But, some basic outputs a scraper could make would be:

print("Product Title: ", title_string)

or, appending to a log file (the filename here is just an example):

with open("ps5_log.csv", "a") as log_file:
    log_file.write(f"{title_string},")

For simplicity’s sake, future code assumes that you’re printing the found information. You’ll want a full output when initially testing, regardless of how streamlined you want your final version to be.

Code Main Body

As mentioned earlier, we’ll just use a single User Agent in the sample. Down the line, you’ll want it either cycling through a list or randomly selecting from a pool.

Similarly, the early stages of setting up the DIY PS5 availability tracker don’t need any proxies just yet. But, when your tracker goes live, you’ll definitely want them included.

if __name__ == '__main__':
 
    HEADERS = ({'User-Agent':
                'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36',
                'Accept-Language': 'en-US, en;q=0.5'})

Next up, we’re going to tell it what URL to visit. If you were checking multiple sites, this is where you’d have it loop through a list instead of hitting this single hard-coded URL; a sketch of that variant appears just after the URL.

    URL = "https://www.amazon.com/PlayStation-5-Console/dp/B09DFCB66S/"
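And here is a sketch of the multi-URL variant mentioned above (the extra entries are placeholders, and the loop reuses HEADERS along with the get_title function from earlier):

    # Hypothetical list of product pages to cycle through.
    URLS = [
        "https://www.amazon.com/PlayStation-5-Console/dp/B09DFCB66S/",
        # Add more product pages here, with tags adjusted per site.
    ]

    for url in URLS:
        page = requests.get(url, headers=HEADERS)
        page_soup = BeautifulSoup(page.content, "lxml")
        print("Product Title: ", get_title(page_soup))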
 

The requests library fetches the raw page, giving BeautifulSoup the foundation to work its magic.

    webpage = requests.get(URL, headers=HEADERS)

    soup = BeautifulSoup(webpage.content, "lxml")

Now that BeautifulSoup has analyzed all of the ingredients of the site, you can start calling the previously made functions.

    print("Product Title: ", get_title(soup))
    print("Product Price: ", get_price(soup))
    print("Availability: ", get_availability(soup))
    print()

Here is where you could set it up to send a notification ping or email, for example when get_price(soup) comes back under $600 and get_availability(soup) no longer returns "Currently unavailable."
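A minimal sketch of that check, using the string-returning version of get_availability (the $600 threshold and the print are stand-ins for whatever alert you wire up):

    price = get_price(soup)
    availability = get_availability(soup)

    # An empty string means the price tag wasn't found, so guard for that first.
    if price != "" and price < 600 and availability != "Currently unavailable.":
        print("Possible PS5 restock at an acceptable price! Check:", URL)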

Conclusion

We’ve merely scraped, heh, the tip of the iceberg. Understanding the components of a web scraper will prepare you for scaling upward, or alternate uses with a little tweaking.

While proxies are necessary to ensure your DIY PS5 availability tracker doesn’t get banned, be wary of The Risks of Using Free Proxies. An economical provider like KocerRoxy will reliably take care of you for only $5 per GB.
