
Chapter 6: Leverage Easy2Digital APIs and Youtube Key to Scrape View, Comment, and Like Data of More than 50 Videos From Top Ranking Ones

In the previous chapter, I walked you through how to create a Youtube API key and scrape videos by search query via Python programming. The beauty of Youtube video scraping is that you don’t need to pay any fees, you can get the research done in minutes, and you can focus more on content creation for your brand and eCommerce store product reviews, which is a super-efficient way to work.

Having said that, those fetched videos alone are not sufficient for filtering out the better YouTubers. We also need to look into the video data, such as views, comments, likes, and dislikes, which can help us build a better candidate list. Also, by default the Youtube Data API v3 returns at most the top 50 videos, which might not be enough, because you might miss some secondary-tier YouTubers whose videos rank in lower positions but whose content is pretty good and engaging.

I haven’t yet talked about how to fetch that data. So in this Python Tutorial 6, I’ll continue with the previous Youtube video scraping Python script and walk you through the Youtube Data API video data. You can learn how to write the code to fetch the data, and also how to scrape more than 50 videos from the search query result.


Youtube Data API – Video Documentation

First things first, we always start with the platform’s API documentation reference, whichever platform’s APIs you are going to leverage, such as WeChat auto-reply, Facebook Messenger, etc. Last time, we checked out the search list method, and here we check out the videos section.

As we can see from the following JSON structure, which shows the format of a video resource, the videos resource has a list() method that returns the videos we are going to fetch data for. The data we need, such as views, comments, and likes, sits under the statistics part.

      "statistics": {

    "viewCount": unsigned long,

    "likeCount": unsigned long,

    "dislikeCount": unsigned long,

    "favoriteCount": unsigned long,

    "commentCount": unsigned long

  },.......

We create a variable vid_request, which calls the Youtube videos list() method and requests the statistics data part. vid_ids comes from the previous search.list(); it is the variable that represents the videos from the search query result.

vid_request = youtube.videos().list(
    part='statistics',
    id=vid_ids
)

If we execute the code, we can see it works and the data we need comes back. The video data is printed right after the video URL yt_link.
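As a quick sketch of that step (assuming the youtube client, vid_ids, and yt_link already exist from the previous chapter’s script), executing the request and printing the response could look like this:

# Execute the videos().list() request built above
vid_response = vid_request.execute()

# Print the video URL followed by its statistics (yt_link comes from the earlier script)
print(yt_link)
print(vid_response)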

Video Data Sample:

HomeAutomationX

Is the Ring Spotlight Camera Worth Buying?

https://youtu.be/U-06WEwtaSk

{'kind': 'youtube#videoListResponse', 'etag': 'DUSIAmpvqb5UpMMas5J6IiT_RnM', 'items': [{'kind': 'youtube#video', 'etag': 'Rak9ja0G6clMeVEuQlcqhe0IvMc', 'id': 'U-06WEwtaSk', 'statistics': {'viewCount': '136516', 'likeCount': '1214', 'dislikeCount': '232', 'favoriteCount': '0', 'commentCount': '234'}}], 'pageInfo': {'totalResults': 1, 'resultsPerPage': 1}}

How to Fetch Video Performance from Youtube Search Query Result

We can see that all the data sits under items, and the data we are going to fetch sits under statistics, as mentioned above. Also, as we need to fetch the data for each video id, we create a loop as well. I’m not going to go into detail because it’s similar to my previous article. If you’d like to learn how to identify the data location, please check out this article:

Chapter 5 – Build a Youtube Bot to Scrape Trending Videos Using Youtube and Easy2Digital APIs

for item in vid_response['items']:
    vid_view = item['statistics']['viewCount']
    print(vid_view)

Some videos might not have comments or likes enabled, so we can use try/except to handle the missing data values and avoid errors during the scraping process.
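Here is a minimal sketch of that idea, assuming the same vid_response loop as above; the variable names are only for illustration:

for item in vid_response['items']:
    statistics = item['statistics']
    vid_view = statistics['viewCount']
    # likeCount and commentCount may be missing when likes or comments are disabled
    try:
        vid_like = statistics['likeCount']
    except KeyError:
        vid_like = 'N/A'
    try:
        vid_comment = statistics['commentCount']
    except KeyError:
        vid_comment = 'N/A'
    print(vid_view, vid_like, vid_comment)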

Then, we run the script (e.g. Command + B in Sublime Text) and can see the performance data of the top 50 videos all show up in the result.

How to Scrape as Many Videos as Possible from the Search Result

As we know, the Youtube Data API responds with a maximum of 50 videos per search query by default, while in fact the search result contains far more than 50 videos. So first things first, we need to check whether the Youtube search list provides a parameter that lets us fetch data page by page.

As the screencap below shows, we can find a parameter called pageToken, which can be used to fetch the videos on the previous or next page.

Also, as we need to repeatedly run the whole block we created to scrape one page, we need to wrap it in another kind of loop, called while, at the top level of the script.

Syntax and argument:

while <expr>:
    <statement(s)>

<statement(s)> represents the block to be repeatedly executed, often referred to as the body of the loop. This is denoted with indentation, just as in an if statement.

First, we create a variable named nextPageToken whose value is None, because we start from the first page.

Then, we open a while loop whose condition is simply True. Just remember to indent the existing body that scrapes a single page, because the indentation tells Python that the while loop repeats that whole block. Also, we set maxResults=50, use the pageToken parameter, and pass the nextPageToken variable as its value.

After a page has been fetched, we need to add a few lines of code that tell Python to scrape the next page, and to stop fetching if the next page doesn’t exist:

nextPageToken = search_response.get('nextPageToken')

if not nextPageToken:
    break

Just remember to keep the CSV writing command within the loop block, because that tells Python to store each new page’s video data in the file on every loop.
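Putting these pieces together, here is a simplified sketch of the pagination loop. It assumes the youtube client, the search keyword, and a csv_writer object are already set up as in the previous chapter, and it only shows the looping structure rather than the full scraper:

nextPageToken = None

while True:
    # Fetch one page of search results (max. 50 per page)
    search_response = youtube.search().list(
        part='snippet',
        q=keyword,
        type='video',
        maxResults=50,
        pageToken=nextPageToken
    ).execute()

    vid_ids = [item['id']['videoId'] for item in search_response['items']]

    # Fetch the statistics of the videos on this page
    vid_response = youtube.videos().list(
        part='statistics',
        id=','.join(vid_ids)
    ).execute()

    # Keep the CSV writing inside the loop so every page is stored
    for item in vid_response['items']:
        stats = item['statistics']
        csv_writer.writerow([item['id'], stats.get('viewCount'),
                             stats.get('likeCount'), stats.get('commentCount')])

    # Move to the next page, or stop when there is none left
    nextPageToken = search_response.get('nextPageToken')
    if not nextPageToken:
        break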

If things are done as instructed, all the video data will be generated and written into the CSV.

For example, I searched ‘ring spotlight camera’ and grabbed 271 videos from this search query. So you can imagine, if you place this file somewhere connected with your influencer marketing research and recruitment dashboard, the dashboard can automatically show you which new YouTubers appear and which new, better-engaging videos about ring spotlight cameras are released. In the coming Python Tutorial chapters, I’ll walk you through how to refresh it automatically as the external raw data is updated.

Leverage Easy2Digital APIs

If you find the script complicated, or you’d rather not update the script and fix bugs on and off, you can leverage the Easy2Digital Youtube Bot API. Here is the token endpoint:

https://www.buyfromlo.com?token=&youtubeKey=&keyword=&pageofResult=

With this API endpoint, you just need to add the Youtube key, the Easy2Digital token, the keyword related to the video content, and the total number of result pages from that keyword (max. 5 pages of SERP) you aim to scrape. The scraped result is the same as the one shown above.
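For example, a minimal sketch of calling the endpoint with the requests library could look like the snippet below; the token and key values are placeholders you need to replace with your own:

import requests

# Placeholders: replace with your own Easy2Digital token and Youtube API key
params = {
    'token': 'YOUR_EASY2DIGITAL_TOKEN',
    'youtubeKey': 'YOUR_YOUTUBE_API_KEY',
    'keyword': 'ring spotlight camera',
    'pageofResult': 5  # max. 5 pages of SERP
}

response = requests.get('https://www.buyfromlo.com', params=params)
print(response.text)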

For more details regarding Marketing APIs, please check out this page.

Easy2Digital Marketing APIs Documentation

Full Python Script of Youtube Video and Performance Data Scraper

If you would like a free Easy2Digital API token and the full version of the Python script of the Youtube Video and Performance Data Scraper, please subscribe to our newsletter and add the message “Chapter 6”. We will send the script to your mailbox immediately.

Contact us

So easy, right? I hope you enjoyed reading Chapter 6: Leverage Easy2Digital APIs and Youtube Key to Scrape View, Comment, and Like Data of More than 50 Videos From Top Ranking Ones. If you did, please support us by doing one of the things listed below, because it always helps out our channel.

If you are interested in learning how to save the data in a Google Sheet via API, please check out this article:

Python Tutorial 7: Save Web Scraping Data through Google Sheets API
