McKay Development

Lagg's Achievements

  1. Wanted to poke my head in and confirm that Valve has done something screwy with the login cookies that results in sessions expiring quickly and seemingly at random. The code I could repro this on hadn't been touched for at least a month or so, so I don't believe it's a bug in either my stuff or steam-community. It doesn't appear to be a session conflict either, as I've made sure my code only does that initial AuthenticateUser call and then reuses the cookies/request object for the community, store and other httpRequest endpoints. So it would seem that something on Steam's side is deciding the sessions need to be expired ASAP. Calling webLogOn instead of the webauth protobuf that logOn calls automatically seems to be *slightly* more reliable. Not sure what can be done here besides calling webLogOn on a timer, something like the sketch below. Is this what the client itself does now or something? It would definitely explain the "Verifying login information" interstitial every other page.
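     Roughly what I mean, assuming node-steam-user and node-steamcommunity; the 30-minute interval is a guess on my part, not a measured expiry:

     ```js
     var SteamUser = require('steam-user');
     var SteamCommunity = require('steamcommunity');

     var client = new SteamUser();
     var community = new SteamCommunity();

     client.logOn({accountName: 'username', password: 'password'});

     // Every successful web auth hands back fresh cookies; reuse them everywhere.
     client.on('webSession', function(sessionID, cookies) {
         community.setCookies(cookies);
     });

     // Proactively refresh the web session before Steam decides to expire it.
     setInterval(function() {
         client.webLogOn();
     }, 30 * 60 * 1000);
     ```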
  2. Hi, quick question. Does the page that contains `g_rgAppContextData` have the same throttling rules applied as the inventory feeds proper (/inventory/X/Y/Z)? In other words, does the same eased limit apply to logged-in users looking at their own pages? I recall that around 8 months ago Valve severely increased the throttling on this page, but I don't remember whether it was part of the original throttling increase, when things became slow and cache-missey, or came after they introduced the new feeds.

     Edit: So I've been testing this in-userscript (roughly the sketch below) and it looks like that page will be served back reliably if you're logged in. Results are inconclusive when logged out, but it seems to be more lenient than the inventory pages themselves, because I get the "The request is a duplicate and the action has already occurred in the past, ignored this time (29)" error when the context data page is otherwise fine. Is it actually safe to use getInventoryContexts again? That'd be nice.

     Edit 2: Forgot to update the thread. Looks like I was a version behind and confusing myself because of the 302 chain I was seeing in-browser/userscript versus just getting "Malformed response" from the lib. For... whatever reason, the flow in error conditions (private?) appears to be: /profiles/whatever/inventory -> /id/vanity/inventory -> /id/vanity
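     A rough sketch of that test, assuming node-steamcommunity's httpRequestGet and cookies already set from a logged-in session; the regex is illustrative, not exhaustive:

     ```js
     var SteamCommunity = require('steamcommunity');
     var community = new SteamCommunity();
     // (assumes community.setCookies() was already called with a live session)

     community.httpRequestGet('https://steamcommunity.com/my/inventory', function(err, response, body) {
         if (err) {
             return console.log(err);
         }

         // The page inlines the context data as a JS assignment; scrape it out.
         var match = body.match(/var g_rgAppContextData = (.*);/);
         if (!match) {
             return console.log('No context data (private profile? throttled?)');
         }

         console.log(JSON.parse(match[1]));
     });
     ```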
  3. Yeah, going to have to agree with McKay here after my own observations. I would avoid using it in code at all; it's just unreliable now. I'm moving my clients over to full relogs. Hoping this isn't part of a larger crypto bug or something on Steam's side.
  4. Yes. That polling is more likely than not one of the roots of the problem. You have X accounts logged in concurrently, and given the above I'm guessing you don't use any kind of self-rate-limiting to stop 15-30 of those bots from requesting the confirmations at the same time when they happen to finish the 60-second loop together. If you log in a bot and start its timer, then log in the next bot and start its timer, you're only offsetting your requests by a few seconds, and that's where the 429s come from. Thousands of people writing bad code like this is by definition a DDoS, because it floods Valve's servers with requests to the point that they start missing caches, returning errors or otherwise acting strangely. (See the sketch below this post for the kind of self-rate-limiting I mean.)

     Binding each bot to a new IP is going to make it work, but good luck fixing the next throttling policy Valve implements because of this stupidity. Having a new IP doesn't change the simple fact that you have bad polling code; it just means you're fooling Valve's checks. And they're going to care about their own systems functioning for their own purposes long before they consider the community's projects. Eventually "hurp durp I'll just spam some more from a new IP" isn't going to matter anymore, when they finally decide to enact a global captcha policy or something of the like because of exactly the same kind of abuse and rate-limit bypassing you're doing.

     Given that browsers don't work like this, and that there is a massive difference between concurrent requests to download assets and concurrent requests to download dynamic pages, that's not a very applicable statement. But okay, let's pretend for a minute that browsers did work like what you're doing. The closest equivalent I can think of would be a site that started one timer X times to pull a different URL through ajax. Does starting 20 (just as an example number) near-concurrent ajax requests seem like normal browser behavior to you, or insane spamming? Well, consider Steam Inventory Helper and the fact that there's now a new captcha on the pages it abused.

     Valve accidentally gave us a useful way of sending trades as a byproduct of their new system. McKay wrote one of the better libs implementing them. I assumed this statement was self-explanatory.
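     The kind of self-rate-limiting I mean, sketched with a single shared queue. Here `bots` is assumed to be your own array of logged-in instances, and checkConfirmations() stands in for whatever your per-bot poll actually does:

     ```js
     var queue = [];

     // Each bot re-enqueues itself once per minute instead of hitting Steam directly.
     bots.forEach(function(bot) {
         setInterval(function() {
             queue.push(bot);
         }, 60 * 1000);
     });

     // Drain one bot every 2 seconds (tune to taste). Worst case a bot waits
     // (bots.length * 2) seconds, still well inside its 60-second cycle, and
     // Steam never sees more than one confirmation request from you at a time.
     setInterval(function() {
         var bot = queue.shift();
         if (bot) {
             bot.community.checkConfirmations();
         }
     }, 2000);
     ```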
  5. I'm sorry, but I have to say something here. Are you kidding me? You realize that "hurp durp I'll just use moar IPs!" contributes to the ongoing DDoS Valve faces because of bad code, and therefore leads to even more draconian throttling policies. Can you not just write responsibly? Better yet, competently? Or is ruining what little useful tooling Valve accidentally provides for the community the primary goal? Is the very obvious cause and effect behind these recent throttling updates not clear?
  6. Remove the &p= part from the URL in components/inventoryhistory.js, as well as the code that tries to capture the pagination data (which will otherwise lead to an undefined reference). This does mean losing that metadata and only getting the first page, unless you want to add logic for after_trade. Naturally this qualifies as a workaround and is probably not as clean as whatever the upstream fix will look like, but until something is pushed I don't know what else to tell you :/ Roughly, the edit looks like the sketch below.
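     A paraphrased sketch of the edit, not the library's exact lines:

     ```js
     // In components/inventoryhistory.js: build the URL without the now-
     // blacklisted &p= parameter...
     // was: var url = "https://steamcommunity.com/my/inventoryhistory/?l=english&p=" + options.page;
     var url = "https://steamcommunity.com/my/inventoryhistory/?l=english";

     // ...and delete the regexp lines that scraped the pagination stats.
     // The markup they matched no longer exists, so they would only leave
     // undefined references on the result object.
     ```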
  7. Essentially it appears `p` in the query string on /inventoryhistory is blacklisted and will unconditionally error out. Hilariously enough, it's just this parameter; ?foobar22=1 is fine. It looks like they're doing something more or less like primary-key-based paging now with after_time and after_trade, which must be interesting on their end sorting-wise. Also, the pagination stat regexps are no longer valid because the stats they matched apparently no longer exist as such. So fix-wise, I guess: remove the regexp lines, avoid using the properties, and remove "&p=" + options.page from inventoryhistory.js. These appear to be the only breaking changes; no actual markup changed beyond that inner text, which was nice of them, I suppose. Overall an interesting load-reduction strategy on Valve's part, I must say. A sketch of the new paging follows.
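     A sketch of how the cursor-style paging appears to work; the parameter semantics are my inference from watching requests, not documented behavior:

     ```js
     var SteamCommunity = require('steamcommunity');
     var community = new SteamCommunity();
     // (assumes a logged-in session; lastTime/lastTradeId come from the
     // last row of the previous page - there is no page number anymore)

     community.httpRequestGet(
         'https://steamcommunity.com/my/inventoryhistory/?l=english' +
             '&after_time=' + lastTime + '&after_trade=' + lastTradeId,
         function(err, response, body) {
             if (err) {
                 return console.log(err);
             }

             // Parse the markup as before; only the pagination stats are gone.
         }
     );
     ```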
  8. Edit: Per the post below, this was also backported from v6.3.0 to v5.12.0. Holy crap, I thought there were just two active branches.

     Hi. After something like 3 months of trying to debug a memory leak (McKay can probably attest to all the posts I've made about this), and indeed even browsing node/v8's code, I've finally found out why there's indefinite growth of memory usage even when the asset cache isn't used: a leak which shows up in snapshots as system-level retained objects (i.e. things we *never* control at the interpreter level).

     As it turns out, there is a bug in versions of node prior to 4.4.5 where the handle to a VM context was generated as a global, meaning no garbage collection. Since v2 made the wise decision of using VM contexts instead of straight evals, this of course triggered that bug, and every time a context was created it was, for all intents and purposes, putting every object in that context into a global.

     Hope this PSA can save a few people the hassle, and perhaps a few bug reports in the future. The relevant changes that corrected the issue in later node releases are tagged with "contextify": https://github.com/nodejs/node/blob/v4.4.5/CHANGELOG.md
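     A minimal repro sketch of the pattern that trips the leak; on an affected node the RSS grows without bound, while on >= 4.4.5 it stays flat:

     ```js
     var vm = require('vm');

     setInterval(function() {
         // In node < 4.4.5 each contextified sandbox was retained via a
         // global handle, so none of these objects ever got collected.
         var sandbox = {data: new Array(100000).fill('x')};
         vm.createContext(sandbox);
         vm.runInContext('data.length', sandbox);
     }, 100);
     ```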