Archive link: https://archive.ph/ICJZZ
Until now, the EU has allowed most of its member states to rely on American big-tech companies for communication and for storing sensitive data. For example, many universities across Europe rely on Google or Microsoft for email, research data storage, and departmental communication. Similarly, many of them write their research in Microsoft Word, which these companies could use to train their own AI models.
A majority of regular citizens rely on Meta for instant messaging (WhatsApp), Facebook, and Instagram, but also on X (formerly Twitter) and TikTok. None of these apps is properly regulated even with the EU's efforts, leaving people unshielded from other states' attempts at polarization. There is also the problem of mass profiling of users, which is used to serve targeted advertisements and sometimes to influence public opinion on certain topics (cough Musk tweaking the Twitter algorithm to promote AfD cough).
The article I linked focuses mainly on maintaining data privacy while our data is harvested by outside entities. In my opinion, that is a terrible approach. We need to move everything ASAP to open-source alternatives, preferably EU-based ones. Attempts at this have previously been made in Germany, which should give hope to other countries in the EU.
The cost of moving away from Google/Microsoft tech stacks would be a drop in the bucket compared to the wealth these companies extract from the EU. Similarly, offering alternatives to social media, like Friendica, Mastodon, Pixelfed, Lemmy, and perhaps PeerTube, would be a huge win against disinformation and propaganda from other countries.
If recent events are not a catalyst to push the EU away from US software, I do not know what will be. Do you think this would be possible at all?
Get llama.cpp and try Qwen3.6-35B-A3B. It just came out and looks good. You'll have to look into optimal settings, as it's a Mixture of Experts (MoE) model with only 3B active parameters, which means the rest of the weights can stay in system RAM while still giving quick inference.
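To see why the MoE setup matters, here's a rough back-of-the-envelope memory estimate. The parameter counts come from the model name above; the ~4.5 bits/weight figure is an assumption roughly matching a mid-range 4-bit GGUF quant, so treat the numbers as ballpark only:

```python
def gguf_size_gb(n_params: float, bits_per_weight: float = 4.5) -> float:
    """Approximate in-memory size of a quantized model.

    bits_per_weight=4.5 is an assumed average for a 4-bit-ish quant;
    real quants vary per tensor.
    """
    return n_params * bits_per_weight / 8 / 1e9

# Assumed sizes from the model name: 35B total params, 3B active per token.
total = gguf_size_gb(35e9)   # all experts, can sit in system RAM
active = gguf_size_gb(3e9)   # weights actually touched per token

print(f"total weights: ~{total:.1f} GB, active per token: ~{active:.1f} GB")
```

So you need RAM for roughly 20 GB of weights, but each token only reads about 1.7 GB of them, which is why CPU inference stays reasonably fast compared to a dense model of the same total size.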
You could also try the dense model (Qwen3.5-27B), but it will be significantly slower. Put these in a coding harness like Oh-My-Pi, OpenCode, etc. and see how they fare on your tasks. They should be OK for small tasks, but don't expect Opus / Sonnet 4.6 quality; think better than Haiku.