They're in a lot of government networks worldwide (I visited them a long time ago to discuss some potential cooperation). They're technically quite sound, and as a bonus, being privately owned and headquartered in small Finland is generally seen as reducing the likelihood of backdoors or similar issues due to conflicting state interests.
With lower voltage DC you can only set the house on fire. With high voltage AC you can set the house on fire and electrocute people. In a safety oriented company you'd try to limit the parts of the device carrying 230V (or, more generally: if your device has dangerous bits, you try to keep those bits in as few places as possible, as that limits the number of places you need to keep safe). Now obviously this has limits - like the mentioned bed size - but I don't think we're yet at a point where this should overrule safe design principles.
I haven't seen a Bambu printer myself yet - but given that the cable is undersized and not protected against side effects from bed movement, I'd bet they also skimped on making everything carrying 230V safe - in which case this is a cheaper design. I'm reasonably confident that a safe 230V heating design for a printer that size would not give you noticeable cost savings over a DC design, if any at all.
Apple has low memory behaviour optimized way better than Windows, so running at 8GB will not be as painful as it is with Windows - but in the background the OS will constantly shuffle stuff around to avoid running out of memory, which costs performance.
16GB is the bare minimum for a computer nowadays - and that applies to Macs as well. I'm currently using a 16GB M1 Air for some things, and I also regularly run into performance issues due to memory limits without doing anything heavy.
A small form factor with a small high density connector. Most interfaces are not populated as on the regular Pis, but just led out via the connector, so you can decide what you want to expose on your compute module carrier. It has a gigabit ethernet chip on board, and exposes a PCIe lane - the RPi 4 also has PCIe, but there it is hooked up to the USB3 controller. With the compute module you can decide what you want to do with it.
A good starting point for a Wikipedia rabbit hole covering the software aspects of how to drive a display: https://en.wikipedia.org/wiki/XFree86_Modeline
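For reference, a modeline packs the whole display timing into one line: pixel clock plus horizontal and vertical sync geometry. A sketch (this particular line is what the CVT formula yields for 1920x1080@60 via `cvt 1920 1080 60`; exact numbers depend on the timing standard used):

```
#                          clock   hdisp hs-start hs-end htotal  vdisp vs-start vs-end vtotal  flags
Modeline "1920x1080_60.00" 173.00  1920  2048     2248   2576    1080  1083     1088   1120    -hsync +vsync
```

The display scans htotal x vtotal pixel slots per frame (the extra slots beyond the visible area are the blanking intervals), so the refresh rate works out to clock / (htotal * vtotal) = 173,000,000 / (2576 * 1120) ≈ 60 Hz.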
A 50 pin Centronics connector should be bulkier in all dimensions.
The problems come when the woman turns around and keeps closing the notebook with her breasts.
That has changed over the last few years - I'd prefer a proper USB3 to SATA bridge over a shitty SATA controller - and the quality of integrated SATA controllers isn't that great nowadays.
The encryption tech in many cloud providers is typically superior to what you run at home, to the point that I don't believe it is a common attack vector.
They rely on hardware functionality in Epyc or Xeon CPUs for their stuff - I have the same hardware at home, and don't use that functionality as it has massive problems. What I do have at home is smartcard based key storage for all my private keys - keys can't be extracted from there, and the only outside copy is a passphrase-encrypted base64 printout on paper in a sealed envelope in a safe place. Cloud operators will tell you they can also do the equivalent - but they're lying about that.
And the homomorphic encryption thing they're trying to sell is just stupid.
Overall, hardened containers are more secure than bare metal, as the attack vectors are radically different.
Assuming you put the same single application on bare metal the attack vectors are pretty much the same - but anybody sensible stopped doing that over a decade ago as hardware became just too powerful to justify that. So I assume nowadays anything hosted at home involves some form of container runtime or virtualization (or if not whoever is running it should reconsider their life choices).
My point is that it is simpler, imo, to button up a virtual env - and that includes a virtual network env.
Just like the container thing above, pretty much any deployment nowadays (even just simple low powered systems coming close to the old bare metal days) will contain at least some level of virtual networking. Traditionally we were binding everything to either localhost or world, and then going from there - but nowadays even for a simple setup it's way more sensible to have only something like a nginx container with a public IP, and all services isolated in separate containers with various host only network bridges.
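The layout described above - one reverse proxy with a public port, everything else on host-only bridges - can be sketched in a compose file (service names and the app image are illustrative, not a specific deployment):

```yaml
# Only the proxy publishes ports on the host; everything else lives on
# internal bridges and is reachable solely through the proxy.
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    networks: [frontend]
  app:
    image: example/app        # hypothetical application image
    networks: [frontend, backend]
  db:
    image: postgres:16
    networks: [backend]       # not on the proxy-facing network at all
networks:
  frontend:
  backend:
    internal: true            # host-only bridge: no route to the outside
```

The `internal: true` flag is what gives you the "host only" behaviour: containers on that network can talk to each other, but there is no route in or out, so the database is only reachable through the application container.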
Well, with bare metal, yes - but when your architecture is virtual, configuration rises in importance as the first line of defense.
You'll have all the virtualization management functions in a separate, properly secured management VLAN with limited access. So the exposed attack surface (unless you're selling VM containers) is pretty much the same as on bare metal: somebody would need to exploit application or OS issues, and then in a second stage break out of the virtualization. This has the potential to cause more damage than small applications on bare metal - and if you don't have failover, the impact of rebooting the underlying system after applying patches is more severe.
On the other hand, already for many years - and way before container tooling was mature - hardware was too powerful for just running a single application, so it was common to have lots of unrelated stuff on one box, which is a maintenance nightmare. Just having that split up into lots of containers probably gains you more security than the added risk of having to patch your container runtime costs.
Encryption is interesting - there really is no practical difference between cloud and self hosted encryption offerings, other than an emotional response.
Most of the encryption features advertised for cloud are marketing bullshit.
"Homomorphic encryption" as a concept just screams "side channel attacks" - and indeed as soon as a team properly looked at it they published a side channel paper.
For pretty much all the technologies AMD and Intel advertise to solve the various problems of making people trust untrustworthy infrastructure with their private keys, side channel attacks or other vulnerabilities exist.
As soon as you upload a private key into a cloud system you've lost control over it, no matter what their marketing department tells you. Self hosted, you can properly secure your keys in audited hardware storage that prevents key extraction.
Regarding security issues, it will depend on the provider - but one wonders if those are real or imagined issues?
Just look at the Microsoft certificate issue I've mentioned - data was compromised because of that, they tried to deny the claim, and it was only possible to show that the problem existed because some US agencies paid extra to receive error logs. Microsoft's solution to keep you calm? "Just pay extra as well, so you can also audit our logs to see if we lose another key."
That's probably the "Vellamo" from Rhea Lines.
I haven't really used bookmarks for probably close to two decades, for various reasons.
Keeping them synchronized was always a pain, and that was before you got into multiple browsers. That part at least is better now.
Then, the interfaces to manage them sucked - I did try a bit back then to manage them externally, but the storage formats were also stupid.
And then I seem to have reached the number of bookmarks the browsers were no longer able to handle (presumably due to the shitty way they were storing them), and adding or editing bookmarks always involved waiting several seconds between clicks for the browser to react.
Pretty much everything apart from the first point is still true for the built in bookmark managers.