import requests

def get_pagination_links(page_url, pagination_arr):
    # Make a URL request with the predefined headers above
    r = requests.get(page_url, headers=headers)
    # Store the response in a json/dictionary object
    jsonobj = r.json()
    if 'pagination' in jsonobj:
        cursor = jsonobj['pagination'].get('cursor')
        if cursor:  # the last page returns no cursor, so stop recursing
            pagination_arr.append(cursor)
            get_pagination_links("https://api.twitch.tv/helix/videos?user_id=31723226&after=" + cursor, pagination_arr)
Then I just print the length of the array I have been building, which is just a list of all the pagination links.
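For clarity, that final step is just something like:

print(len(pagination_arr))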
Here is where it gets weird. The code runs just fine, but I get a different number of links each time.
The first time I get about 30, but if I run it again right away I get 2.
There must be a limit on the number of requests you can make in a given time. Does anyone know what the problem is, or some kind of workaround?
I am saying sometimes I get 32 pages and other times I get 4.
I keep getting 32, though, if I wait about 10 minutes, so I think it is time related.
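If it is rate limiting, the response should tell you: Helix returns a 429 status when you exhaust the request bucket, along with Ratelimit-* headers. A quick sketch to check for that (page_url and headers as above):

import requests

r = requests.get(page_url, headers=headers)
if r.status_code == 429:
    # Twitch Helix reports its rate-limit bucket state in these headers
    print("Rate limited. Remaining:", r.headers.get('Ratelimit-Remaining'),
          "- resets at:", r.headers.get('Ratelimit-Reset'))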
Also, the call you are doing there only fetches a single page and defaults to 20 videos per page. That is why you get 20; I am trying to get all of the pagination links.
I’m up to 55 pages and still going, with 20 records per page.
On a second run it’s being a little sluggish about it; some pages just take longer to return than others. (It just got up to 59 pages and is still going, but every few pages it gets sluggish.)
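Side note: since the videos endpoint defaults to 20 results per page, you could cut the page count roughly fivefold by asking for the documented maximum of 100 per page, e.g.:

page_url = "https://api.twitch.tv/helix/videos?user_id=31723226&first=100"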
I wonder if you are getting a “timeout” and just not logging it.
So you are getting four pages and the fifth page is timing out and you are not catching the timeout as a timeout?
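An easy way to rule that out is to pass an explicit timeout and catch it, so a timed-out page gets logged instead of silently vanishing; a minimal sketch:

import requests

try:
    r = requests.get(page_url, headers=headers, timeout=5)
except requests.exceptions.Timeout:
    print("Request timed out for", page_url)
else:
    jsonobj = r.json()
    # ... continue paging as before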
I ran your code and I was able to confirm your results. I was actually planning to switch to Node.js anyway, so I think I am just going to abandon the Python code and chalk it up as a learning experience.
THANKS AGAIN!
One more addition:
I was able to get my Python code to work. It was not timing out; it was just getting a bad response. So I basically added a while loop that keeps trying until it gets a good response. Sometimes it just has to try a few times.
# Start from a non-200 sentinel so the loop runs at least once
s_code = 429
while s_code != 200:
    # Make a URL request with the predefined headers above
    r = requests.get(page_url, headers=headers, timeout=1)
    s_code = r.status_code
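For what it's worth, that loop will spin forever if the endpoint keeps erroring, and the 1-second timeout will raise an exception rather than set s_code. A slightly more defensive variant (just a sketch, with an assumed retry cap and backoff that were not part of the original code):

import time
import requests

max_retries = 10  # assumed cap; tune to taste
for attempt in range(max_retries):
    try:
        r = requests.get(page_url, headers=headers, timeout=5)
    except requests.exceptions.Timeout:
        r = None
    if r is not None and r.status_code == 200:
        break
    time.sleep(0.5 * (attempt + 1))  # back off a bit so we don't hammer the API
else:
    raise RuntimeError("Giving up after %d attempts" % max_retries)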