Since yesterday around 13h00 CET, my calls to the Helix API have been timing out randomly. At first it was maybe 1 in 10, then over a short period it quickly rose to 1 in 2.
This happens both on URLs with public authentication using an App Access Token, and with private authentication using OAuth.
I had never had this kind of problem before. Neither the server nor the application has changed.
Did I miss something?
In the capture below, a log traces the errors. Each "Symfony\Component\HttpClient\Exception\TimeoutException" entry marks a timeout; the task runs every minute.
As you can see, the errors occur at random moments.
There are a few reports of this floating around.
There are a number of things it could be:
- bad route from you to fast.ly
- bad routes on fast.ly internally
- fast.ly's load balancer doing shenanigans
- misc other issues
- bad luck hitting the API while a backend server in the pool is restarting (or was removed from the pool late)
Basically, retry the request. The second attempt should clear it. Since it's not constant, as you've noticed, even an instant retry should get the required data.
(Fast.ly sits in front of the Twitch API)
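The retry advice above can be sketched as a small wrapper, a minimal sketch in Python; the helper name, attempt count, and backoff schedule are illustrative, not part of any Twitch or Symfony API:

```python
import time


def call_with_retry(fn, attempts=3, base_delay=0.5):
    """Call fn(); on any exception, retry up to `attempts` times total,
    with simple exponential backoff between tries."""
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:  # in practice, catch your client's timeout exception
            last_exc = exc
            if i < attempts - 1:
                time.sleep(base_delay * (2 ** i))
    raise last_exc
```

You would wrap your Helix call in `call_with_retry(lambda: client.request(...))`. Note that Symfony's HttpClient also ships a built-in `RetryableHttpClient` decorator, which is likely the idiomatic fix in the original poster's stack.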
Ok, no worries. For now the retry usually "works" the next minute, but it's still a hell of a hassle.
The fact that it started failing at a round hour like that makes me think something changed somewhere… Can you confirm that they are aware there may be a problem? Is this being looked at on their side?
And if it's fast.ly, it's somewhat out of Twitch's hands anyway, and fast.ly is likely already looking at it.
The times of my faults don't line up with yours, and I have far fewer faults than you (fewer than 10 total, spread across 2 servers in 2 different geographic regions, which also spreads the faults into different time blocks), and I imagine I hit the API a similar number of times as you do.
So it’s just fast.ly things I guess.
And since it's not a constant issue, I imagine it's a "Twitch can't fix" and fast.ly being weird (though that doesn't account for any BGP shenanigans or other things that could mess with routes, load balancers, load balancing, etc.).
Sod's law: it was Paris data center maintenance that was screwing things up… or follow-up rebalancing on fastly's network.
You said that in your first response; can you give me a link to issues reported with this kind of problem?
I had one singular issue.
And it's popped up on the Discord sporadically.
I've also been facing the same issue since that date, using two different applications on different networks. On both I sometimes get the timeout issue, usually about 20 per day.
@Sylvanus are you still having this issue?