As I mentioned in the How to Get a PS5 at Retail Price article, restock alerts are a useful tool. Platforms like Twitter have accounts that publicly announce restocks, such as @LordOfRestocks, @MattSwider, and @CameronRitz. However, running a DIY PS5 availability tracker gives you a distinct advantage whenever it catches a drop before they do.
Randomly refreshing retailers’ websites manually, wistfully hoping a PS5 will suddenly become available, has a depressingly low success rate. But, a bot that constantly checks on your behalf, notifying you when something comes up? Now we’re talking.
What Do People Track?
Most people aren’t tracking just any PS5 anymore. They’re tracking the exact version that matches their budget, region, and patience threshold. That matters because each variant can have a different URL, different buy box behavior, and different false-positive traps.
Common targets worth tracking:
- PS5 Pro / newer SKUs (new listings, faster sell-through)
- Limited editions + special bundles (game bundles, color variants, seasonal drops)
- PlayStation Portal (smaller restocks, quick sellouts)
- Sold by / shipped by conditions to avoid third-party markup
- Price caps (alert only if price is within your acceptable range)
If you’re building this tracker as a foundation, the simplest upgrade is: track multiple URLs (one per model/bundle) and add checks for price + seller so your alert means buyable at retail, not available from a random reseller at 2x.
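To make that concrete, here’s a minimal sketch of such a rule, assuming you’ve already scraped the price and seller name. The price cap and seller strings are illustrative placeholders, not real Amazon values:

```python
def is_buyable(price, seller, max_price=499.99, trusted_sellers=("Amazon.com", "Sony")):
    """Alert only when the listing is in budget AND sold by a trusted seller."""
    if price is None or price > max_price:
        return False
    return seller in trusted_sellers

# A $899 third-party listing should not trigger an alert:
# is_buyable(899.0, "RandomReseller")  -> False
```

Run every scraped listing through a gate like this before firing a notification, and “available” starts meaning “buyable at retail.”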
3 Ways to Track PS5 Stock
| Approach | Best for | Setup time | Tradeoffs |
|---|---|---|---|
| No-code (Distill-style) | Quick alerts with minimal effort | 5–10 minutes | Can get blocked, false positives from page layout changes |
| Semi-code (ready-made scripts / GitHub) | Light customization (multiple URLs, simple rules) | 30–60 minutes | Breaks when markup changes; depends on script quality/maintenance |
| Full DIY (this tutorial) | Maximum control (retailers, regions, price caps, seller filters) | 1–3 hours | Needs ongoing maintenance; anti-bot measures may be required |
Trusted tools for restock alerts (no-code)
If you don’t want to code or you want a backup plan, these tools are widely used for monitoring page changes and price/stock signals:
- Distill Web Monitor: monitors page changes and sends alerts (browser or cloud).
- Visualping: website change detection with alerts (good “set it and forget it”).
- Keepa: Amazon price history charts + price drop alerts (useful to avoid fake “deals”).
- camelcamelcamel: Amazon price tracking and alerts (simple price-watch workflow).
Responsible monitoring: Keep your checks reasonable (don’t hammer retailers), add delays/backoff, and follow each site’s terms and rules. This guide is for educational monitoring and alerting, not for disruptive automation.
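A simple way to keep your check rate reasonable is a delay helper with jitter and exponential backoff. This is a sketch; the base delay and jitter values here are arbitrary starting points you should tune:

```python
import random

def next_delay(base_delay=60, jitter=30, failures=0):
    """Seconds to wait before the next check: a base delay, doubled for each
    consecutive failure (exponential backoff), plus random jitter so your
    requests don't land on a predictable schedule."""
    return base_delay * (2 ** failures) + random.uniform(0, jitter)

# In your main loop: time.sleep(next_delay(failures=consecutive_errors))
```

Backing off on failures matters: if a retailer starts returning errors, hammering it harder only increases the odds of a block.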
What You’ll Need to Make Your DIY PS5 Availability Tracker
We’re going to cover the components of a rudimentary scraper you can scale beyond a single link. Instead of hard-coding one Amazon URL, we’ll use a tiny config (retailer + URL + selectors) and loop through it, so you can track multiple retailers and regions (US, UK, EU) with the same foundation.
Keep in mind that Amazon and other retailers constantly update their page elements. What is written here might need to be adjusted in the future. But, the explanations on how to get the necessary elements will help you make the tweaks yourself.
Basic Understanding of Python
The programming language we’ll be using is Python. It’s one of the easier languages to learn, thanks to the fact that it reads practically like pseudocode. As a popular programming language, there are tons of resources available in the form of both tutorials and modules online.
BeautifulSoup
Further in line with the fact that we’re going for ease of entry, we’ll be using the most beginner-friendly Python library for web scraping, BeautifulSoup.
If you want to start getting fancy down the line, you may want to look into Scrapy. Scrapy is a more elaborate framework with more tools available, but it is also more complex to use.
Be sure to start the code with:
from bs4 import BeautifulSoup
import requests
Or else your bot won’t know what you’re talking about when you start randomly talking about soup. I’d be pretty confused, too, if someone mentioned chicken stock without any context.
Target URL
A quick Amazon search for a PS5 doesn’t always give you the best link to monitor, because results can be a mix of bundles, renewed units, and third-party sellers.
In practice, you’ll want a direct product URL for the exact model (PS5 Pro, bundle, limited edition, Portal, etc.) you’re targeting, then build your checks around availability + price + seller so your tracker alerts you only when the listing is actually worth buying.
Exact HTML Tags
While BeautifulSoup is capable of many things, it still needs precise instructions. And, unfortunately, Amazon may change its page layout over time, meaning you’d need to update the tags you’re searching for.
Additionally, different sites will most likely use different tags, so you’ll need to adjust the relevant sections of code for whichever site you’re checking at any given time.
With a basic understanding of HTML tags, you can look up the information yourself with a right-click Inspect.
You could also use the Chrome extension SelectorGadget, although I don’t have any personal experience with it.
Retailer Patterns
Static HTML pages (BeautifulSoup works well): the “Add to Cart / Currently unavailable” text and price are present in the raw HTML you get from requests.get(). You can usually grab what you need with soup.find() or soup.select_one().
JS-rendered pages (BeautifulSoup alone often fails): key data loads after the page renders in the browser. In requests.get(), you’ll see placeholders or missing elements. In these cases you’ll typically need:
- a headless browser like Playwright/Selenium, or
- an alternate endpoint (some sites embed JSON data you can request directly)
Quick test: if View Page Source is missing the price/availability but Inspect shows it, it’s probably JS-rendered.
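You can run the same quick test programmatically. This sketch just checks whether marker strings you saw in Inspect (e.g. the price) actually appear in the raw HTML that requests would receive:

```python
def missing_markers(raw_html, markers):
    """Return the markers (e.g. price or availability strings seen in Inspect)
    that are absent from the raw HTML. If any come back missing, the page is
    probably JS-rendered and you'll need Playwright/Selenium or a JSON endpoint."""
    return [m for m in markers if m not in raw_html]

# Usage (hypothetical):
# raw = requests.get(url, headers=HEADERS).text
# if missing_markers(raw, ["$499.99", "In Stock"]):
#     print("Likely JS-rendered; plain requests won't be enough")
```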
User-Agents
As per the second tip of the Five Tips for Outsmarting Anti-Scraping Techniques, you need to use user agents to mask your bot’s Digital Fingerprint.
When expanding on this basic foundation of a scraper, you’re going to want to cycle through multiple user agents. For now, we’ll just use one in the sample code, though.
As for what user agents you’ll want to cycle through, there are several resources online with lists of common ones, such as the Common User-Agent List and WhatIsMyBrowser.
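When you do get around to cycling, the usual approach is picking a random User-Agent per request. The strings in this sketch are illustrative; in practice, pull current ones from a maintained list like those linked above:

```python
import random

# Illustrative pool -- refresh these from a maintained User-Agent list.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
]

def random_headers():
    """Build request headers with a randomly chosen User-Agent each call."""
    return {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept-Language": "en-US,en;q=0.5",
    }

# Usage: requests.get(url, headers=random_headers())
```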
Proxies
Only running your DIY PS5 availability tracker a few intermittent times won’t be enough to trigger an IP block. However, once it’s fleshed out and you have it actively running, you’ll need protective measures to avoid getting banned from Amazon.
Of all the different types of proxies available, the ones best suited for web scraping with proxies are Rotating Residential Proxies.
TLDR: Rotating means it will be a fresh IP address on every request. Residential IPs mean that, as far as Amazon is concerned, it looks like a bunch of random totally normal people all checking the PS5 page, as opposed to a single bot.
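Wiring a proxy into requests is just a dictionary. The gateway address and credentials below are placeholders; rotating residential providers typically give you a single endpoint that swaps the exit IP behind the scenes on every request:

```python
# Hypothetical gateway and credentials -- substitute your provider's details.
PROXY = "http://username:password@gateway.example-provider.com:8000"

def proxy_config(proxy_url):
    """Build the proxies dict that requests expects, covering both schemes."""
    return {"http": proxy_url, "https": proxy_url}

# Usage: requests.get(url, headers=HEADERS, proxies=proxy_config(PROXY), timeout=10)
```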
Also read: Top 5 Best Rotating Residential Proxies
Getting Started on Your DIY PS5 Availability Tracker
Now to start playing with some code segments. I’ll explain each function, rather than just dumping a block of code and calling it a day.
Function: Title
First and foremost: even though this particular scraper only checks PS5s, it’s good practice to double-check that the bot is actually doing what you’re telling it to.
Have it extract the product information from the site instead of just assuming it’s looking at the right PS5 page. Besides, this sort of function is also useful if you’re checking multiple URLs instead of just one specific one.
BeautifulSoup’s find function will look for the tag with the attributes shown in the HTML tag section.
title = soup.find("span", attrs={"id":'productTitle'})
Then, let’s make it into a string that we can pull excess spaces out of.
title_value = title.string
title_string = title_value.strip()
Function: Price
The most rudimentary scraper will only check for availability. However, you ought to make sure it isn’t some third-party seller offering units at a markup. By having the bot scrape the price information, it can compare it to a predetermined acceptable price range.
Bots are only as smart as you program them to be. So, you need to account for the fact that it will get an error if an unexpected value is found. Hence the failsafe except condition, so it has something to return when the function is called.
def get_price(soup):
    try:
        price = float(soup.find(id='priceblock_ourprice').get_text().replace('$', '').replace(',', '').strip())
    except (AttributeError, ValueError):
        price = ''
    return price
In Python, numbers are either int (whole numbers) or float (decimals). Since prices come in dollars and cents, we’re dealing with decimals, hence float.
Function: Availability
This availability check will pull the text from the tagged field. This should pretty consistently be “Currently unavailable.” Otherwise, why are you going through the trouble of setting this all up?
def get_availability(soup):
try:
available = soup.find("div", attrs={'id':'availability'})
available = available.find("span").string.strip()
except AttributeError:
available = ""
return available
Another approach would have been to return a boolean (Python’s bool type, a true/false value). That would look something like:
def get_availability(soup):
    try:
        available = soup.find("div", attrs={'id': 'availability'})
        available = available.find("span").string.strip()
        is_available = available != "Currently unavailable."
    except AttributeError:
        is_available = False
    return is_available
Future code samples will operate under the assumption you’re using the first version, though.
Output
We’re already pretty deep into things. Going over making the scraper part of a Discord bot that sends notification pings could fill a whole article on its own.
Similarly, configuring it to send email notifications in Python with the library smtplib would merit a full tutorial.
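That said, here’s a bare-bones sketch of the email route using the standard library. The addresses, server, and credentials are placeholders; many providers require an app password for SMTP access:

```python
import smtplib
from email.message import EmailMessage

def build_alert(title, price, availability, sender, recipient):
    """Compose the restock alert email from scraped values."""
    msg = EmailMessage()
    msg["Subject"] = f"Restock alert: {title}"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"{title}\nPrice: {price}\nAvailability: {availability}")
    return msg

# Sending (placeholder server/credentials):
# with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
#     server.login("you@example.com", "app-password")
#     server.send_message(build_alert(title, price, availability,
#                                     "you@example.com", "you@example.com"))
```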
But, some basic outputs a scraper could make would be:
print("Product Title: ", title_string)
or
file.write(f"{title_string},")
For simplicity’s sake, future code assumes that you’re printing the found information. You’ll want a full output when initially testing, regardless of how streamlined you want your final version to be.
Code Main Body
As mentioned earlier, we’ll just use a single User Agent in the sample. Down the line, you’ll want it either cycling through a list or randomly selecting from a pool.
Similarly, the early stages of setting up the DIY PS5 availability tracker don’t need any proxies just yet. But, when your tracker goes live, you’ll definitely want them included.
if __name__ == '__main__':
    HEADERS = {
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36',
        'Accept-Language': 'en-US, en;q=0.5'
    }
Next up, we’re going to tell it what URL it’s going to visit. If you were checking multiple sites, this is where you’d have it looping through a list or such instead of just going to this single hard-coded URL.
from bs4 import BeautifulSoup
import requests

def text_or_empty(soup, css):
    el = soup.select_one(css)
    return el.get_text(strip=True) if el else ""

def price_or_empty(soup, css):
    raw = text_or_empty(soup, css)
    # Keep it simple: strip currency symbols and commas
    cleaned = raw.replace("$", "").replace("£", "").replace("€", "").replace(",", "").strip()
    try:
        return float(cleaned)
    except ValueError:
        return ""

if __name__ == '__main__':
    HEADERS = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36",
        "Accept-Language": "en-US,en;q=0.5"
    }
    # Small config: retailer + region + URL + selectors (CSS).
    # Region switch tip (US/UK/EU): treat the same console as different targets
    # by country. Availability, buy box rules, and even page structure can vary
    # by region, so track one URL per region (US vs UK vs EU) and include a
    # region label in your loop so your alerts tell you where the restock
    # actually happened.
    TARGETS = [
        {
            "name": "Amazon US (PS5 Console)",
            "region": "US",
            "url": "https://www.amazon.com/PlayStation-5-Console/dp/B09DFCB66S/",
            "selectors": {
                "title": "#productTitle",
                "price": "#priceblock_ourprice, #priceblock_dealprice, .a-price .a-offscreen",
                "availability": "#availability span"
            }
        },
        # Add more targets like:
        # { "name": "Amazon UK (Bundle X)", "region": "UK", "url": "...", "selectors": {...} },
        # { "name": "Retailer Y (EU)", "region": "EU", "url": "...", "selectors": {...} },
    ]
    for target in TARGETS:
        webpage = requests.get(target["url"], headers=HEADERS, timeout=10)
        # "lxml" needs a pip install; the stdlib "html.parser" also works
        soup = BeautifulSoup(webpage.content, "lxml")

        title = text_or_empty(soup, target["selectors"]["title"])
        price = price_or_empty(soup, target["selectors"]["price"])
        availability = text_or_empty(soup, target["selectors"]["availability"])

        print("Target:", target["name"], "| Region:", target["region"])
        print("Product Title:", title)
        print("Product Price:", price)
        print("Availability:", availability)
        print()
Also read: The Risks of Digital Fingerprinting
Conclusion
We’ve merely scraped, heh, the tip of the iceberg. Understanding the components of a web scraper will prepare you for scaling upward, or alternate uses with a little tweaking.
While proxies are necessary to ensure your DIY PS5 availability tracker doesn’t get banned, be wary of The Risks of Using Free Proxies. An economic provider like KocerRoxy will reliably take care of you for only $5 per GB.
FAQs About DIY PS5 Availability Tracker
Q1. What is a PS5 availability tracker?
A PS5 availability tracker is an automated bot that constantly monitors retailer websites for PlayStation 5 stock. Instead of manually refreshing pages, the tracker checks on your behalf and notifies you immediately when consoles become available, giving you a significant advantage over manual checking or public restock alerts.
Q2. How do I get notified when the PlayStation Portal is in stock?
To get notified when the PlayStation Portal is in stock, you can set up stock alerts with stock checker websites or apps that monitor retailer web pages for new inventory. Here’s how:
- Stock Checker Websites: Use websites like NowInStock, Stock Informer, or HotStock. They track various retailers and send alerts when stock is available. Just create an account and select the PlayStation Portal to get alerts by email, text, or app notifications.
- Retailer Notifications: Go to retailer websites where the PlayStation Portal is sold (like Amazon, Best Buy, or GameStop), and sign up for email notifications or wishlist alerts if available. Many retailers let you know directly when an item is back in stock.
- Browser Extensions: Install a stock-checking extension like Distill.io, which monitors web pages and can send you notifications the moment stock is detected.
Q3. Which PS5 models should I track?
Track specific models matching your budget: PS5 Pro, standard consoles, Digital Edition, limited editions, bundles, and PlayStation Portal. Each variant has different URLs and sellout speeds. Configure your tracker to monitor exact models, price caps, and sold by conditions to avoid third-party markups.