submitted 15 hours ago by JRepin@lemmy.ml to c/technology@lemmy.ml

Over the past decade, the AI industry has come to exert unprecedented economic, political and societal power and influence. It is therefore critical that we comprehend the extent and depth of the pervasive and multifaceted capture of AI regulation by corporate actors in order to contend with and challenge it. In this paper, we first develop a taxonomy of mechanisms enabling capture to provide a comprehensive understanding of the problem. Grounded in design science research (DSR) methodologies and an extensive scoping review of existing literature and media reports, our taxonomy of capture consists of 27 mechanisms across five categories. We then develop an annotation template incorporating our taxonomy, and manually annotate and analyse 100 news articles. The purpose behind this analysis is twofold: to validate our taxonomy and to provide a novel quantification of capture mechanisms and dominant narratives. Our analysis identifies 249 instances of capture mechanisms, often co-occurring with narratives that rationalise such capture. We find that the most recurring categories of mechanisms are Discourse & Epistemic Influence, concerning narrative framing, and Elusion of Law, related to violations and contentious interpretations of antitrust, privacy, copyright and labour laws. We further find that Regulation Stifles Innovation, Red Tape and National Interest are the most frequently invoked narratives used to rationalise capture. We emphasise the extent and breadth of regulatory capture by coalescing forces -- Big AI and governments -- as something policymakers and the public ought to treat as an emergency. Finally, we put forward key lessons learned from other industries, along with transferable tactics for uncovering, resisting and challenging Big AI capture, and for envisioning counter-narratives.

Full paper: PDF | HTML | TeX source

Most AI companion platforms advertise $9.99 or $12.99 per month, but the real monthly cost for an active user is 2-5x that once token systems kick in. One major platform I tested advertises $12.99; after tracking every transaction for 30 days, I found regular users end up spending $25-60 monthly once image generation and voice tokens are factored in. On most platforms, the subscription price is the floor, not the ceiling. Platforms with genuinely flat pricing, where what you see is what you pay, are rare. Full breakdown: medium.com/@companaya/i-spent-500-testing-ai-companion-apps-real-monthly-costs-revealed-2026-8a6c0532778d
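The floor-vs-ceiling arithmetic can be sketched as follows. This is a minimal illustration, not data from the linked breakdown: the function name and the token amounts are hypothetical, chosen to fall inside the ranges reported above.

```python
def effective_monthly_cost(subscription, token_purchases):
    """The advertised subscription is the floor; token purchases
    (image generation, voice) stack on top of it."""
    return subscription + sum(token_purchases)

# Hypothetical month on a $12.99 platform with two token top-ups:
cost = effective_monthly_cost(12.99, [9.99, 14.99])  # ~37.97
multiplier = cost / 12.99                            # ~2.9x the advertised price
```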

submitted 4 days ago by emily@lemdro.id to c/technology@lemmy.ml

Real talk: last month I was running a giveaway campaign for a client. The mechanic was simple — comment to enter, tag a friend for a bonus entry. 3,200 comments later, I was staring at a blank Google Sheet wondering how I was going to verify entries, remove duplicates, and pick a winner without losing my mind.

Instagram doesn't give you any export functionality. Zero. You can view comments in the app, you can reply, you can delete — but you cannot export them in any structured way. This is apparently a deliberate product decision, and it's been this way for years.

What I tried first:

- Manually copy-pasting: obviously not scalable past ~50 rows.
- The official Instagram Graph API: requires app review and business account verification, and only returns data from your own posts anyway.
- Third-party "Instagram data export" services: most of these ask for your password or OAuth credentials, which is a non-starter.

What actually worked:

I ended up using a browser extension called Instagram Comments Scraper that runs entirely within your browser session. No password required — it just operates within your existing logged-in session, the same way you're already viewing the comments. The data is processed locally and never sent anywhere external. The output columns it gives you: comment ID, comment text, username, profile URL, profile pic URL, and timestamp. That's exactly what you need to do any meaningful analysis — filter by date, spot bot accounts, remove duplicates, identify authentic entries.

The rate limiting situation:

The part I didn't expect was how Instagram's rate limits work. There's no published threshold — it varies by IP and activity patterns. When the scraper hits a limit, it enters a cooldown mode automatically (the timer shows you how long), then doubles the cooldown if the limit persists. Once the cooldown clears and a request succeeds, it goes back to normal. This meant I could walk away and come back to a finished export rather than babysitting it.

End result: 3,200 comments exported to Excel in about 40 minutes of unattended processing. Filtered to valid entries (tagged a user + original commenter had 10+ followers) in another 20 minutes using basic Excel formulas.

Caveat I'd add for anyone doing this: be reasonable about volume and timing. Don't run 10,000-comment scrapes back-to-back on the same IP. The human-like delay system in the tool helps, but bulk scraping in one long session still carries some account risk. Space it out if you're working with large datasets.

Anyone else found better approaches to this problem? Especially curious if anyone's had success with the official API for use cases beyond your own posts.
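The cooldown behaviour described above — wait on a rate limit, double the wait while the limit persists, reset to normal once a request succeeds — is classic exponential backoff. A minimal Python sketch of that pattern (not the extension's actual code; `fetch_with_backoff` and its parameters are hypothetical, and `fetch` stands in for any request function that returns None when rate-limited):

```python
import time

def fetch_with_backoff(fetch, base_cooldown=60, max_retries=5):
    """Call fetch() until it succeeds, backing off exponentially.

    On each rate-limited attempt, sleep for the current cooldown and
    double it; a successful request returns immediately (which in a
    loop over many requests would also reset the cooldown to base).
    """
    cooldown = base_cooldown
    for _ in range(max_retries):
        result = fetch()
        if result is not None:   # request succeeded
            return result
        time.sleep(cooldown)     # rate limited: cool down
        cooldown *= 2            # limit persisted: double the wait
    raise RuntimeError("rate limit persisted after retries")
```

The practical benefit is exactly what's described above: the loop babysits itself, so a long export can run unattended.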


Technology

42534 readers
106 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask via DM before posting product reviews or ads; such posts are otherwise subject to removal.


Rules:

1: All Lemmy rules apply

2: No low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. This helps blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 7 years ago